Dear Sir/Madam,
I am trying to implement a ReadyBoost-like feature in Linux for my final-year undergraduate project. While researching, I found that whenever a page fault occurs the CPU raises interrupt 14. So, I need your guidance on the following scheme I am thinking of:
I will create an interrupt handler which will be activated when this interrupt occurs.
This handler can extract the linear address of the fault from the CR2 register, and we can use the Linux page tables to get the corresponding physical address.
Do you think this would be a feasible scheme?
Also any tutorial on the same will be highly appreciated.
Thanks to all in advance.
Regards
Isn't "ReadyBoost" simply implemented by running mkswap followed by swapon on the /dev/sd* device special file for the flash disk? As far as I'm aware, all the necessary kernel-side support is in place.
We're not going to do your assignment for you.
IIUI, ReadyBoost is not the same as swap, @caf. It is about caching disk content on a faster medium to make random disk accesses faster. Linux will never page disk-backed pages to swap; they will just be dropped and reread from the disk. Only anonymous pages go to swap.
Also, ReadyBoost data is mirrored to disk, so the USB drive can be removed at any time, and it is encrypted, so if the drive is removed and analysed on another system nothing is disclosed.
So @R-The_Master, you could implement something like ReadyBoost for Linux. But it has basically nothing to do with int 14.
I am currently working with the Xilinx XDMA driver (see here for source code: XDMA Source), and am attempting to get it to run (before you ask: I have contacted my technical support point of contact and the Xilinx forum is riddled with people having the same issue). However, I may have found a snag in Xilinx's code that might be a deal breaker for me. I am hoping there is something that I'm not considering.
First off, there are two primary modes of the driver, AXI-Memory Mapped (AXI-MM) and AXI-Streaming (AXI-ST). For my particular application, I require AXI-ST, since data will continuously be flowing from the device.
The driver is written to take advantage of scatter-gather lists. In AXI-MM mode, this works because reads are rather random events (i.e., there isn't a continuous flow of data out of the device; the userspace application simply requests data when it needs it). As such, the DMA transfer is built up, the data is transferred, and the transfer is then torn down. This is a combination of get_user_pages(), pci_map_sg(), and pci_unmap_sg().
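For concreteness, the AXI-MM path boils down to something like the following. This is a rough sketch using the generic DMA API (of which pci_map_sg() is a thin wrapper), not the driver's actual code: the helper name is mine, error unwinding is abbreviated, and the get_user_pages_fast() signature shown is the modern one, which differs on older kernels.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* Pin a userspace buffer and map it for DMA (illustrative only). */
static int map_user_buffer(struct device *dev, unsigned long uaddr,
                           size_t len, struct sg_table *sgt)
{
    int npages = DIV_ROUND_UP(offset_in_page(uaddr) + len, PAGE_SIZE);
    struct page **pages;
    int pinned, nents, ret;

    pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
    if (!pages)
        return -ENOMEM;

    /* Pin the user pages so they cannot move during the transfer. */
    pinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
    if (pinned != npages) {
        ret = -EFAULT;
        goto out;                   /* unpinning of partial pins omitted */
    }

    /* Build a scatterlist over the pinned pages... */
    ret = sg_alloc_table_from_pages(sgt, pages, npages,
                                    offset_in_page(uaddr), len, GFP_KERNEL);
    if (ret)
        goto out;

    /* ...and hand it to the DMA layer; nents is what the hardware works on. */
    nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_FROM_DEVICE);
    if (!nents) {
        sg_free_table(sgt);
        ret = -EIO;
        goto out;
    }
    ret = nents;            /* pages[] can go; the sg table keeps the page pointers */
out:
    kfree(pages);
    return ret;
}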
For AXI-ST, things get weird, and the source code is far from orthodox. The driver allocates a circular buffer into which the data is meant to flow continuously. This buffer is generally sized to be somewhat large (mine is set on the order of 32MB), since you want to be able to ride out transient periods where the userspace application is not reading and let it work off the buffered data later.
Here's where things get wonky... the circular buffer is allocated using vmalloc32() and the pages from that allocation are mapped in the same way as the userspace buffer is in AXI-MM mode (i.e., using the pci_map_sg() interface). As a result, because the circular buffer is shared between the device and the CPU, every read() call requires me to call pci_dma_sync_sg_for_cpu() and pci_dma_sync_sg_for_device(), which absolutely destroys my performance (I cannot keep up with the device!), since this works on the entire buffer. Funnily enough, Xilinx never included these sync calls in their code, so I first knew I had a problem when I edited their test script to attempt more than one DMA transfer before exiting and the resulting data buffer was corrupted.
As a result, I'm wondering how I can fix this. I've considered rewriting the code to build up my own buffer allocated using pci_alloc_consistent()/dma_alloc_coherent(), but this is easier said than done. Namely, the code is architected around scatter-gather lists everywhere (there appears to be a strange, proprietary mapping between the scatter-gather list and the memory descriptors that the FPGA understands).
Are there any other API calls I should be aware of? Can I use the "single" variants (i.e., pci_dma_sync_single_for_cpu()) via some translation mechanism to avoid syncing the entire buffer? Alternatively, is there perhaps some function that can make the circular buffer allocated with vmalloc() coherent?
Alright, I figured it out.
Basically, my assumptions and/or understanding of the kernel documentation regarding the sync API were totally incorrect. Namely, I was wrong on two key assumptions:
If the buffer is never written to by the CPU, you don't need to sync for the device. Removing this call doubled my read() throughput.
You don't need to sync the entire scatterlist. Instead, now in my read() call, I figure out what pages are going to be affected by the copy_to_user() call (i.e., what is going to be copied out of the circular buffer) and only sync those pages that I care about. Basically, I can call something like pci_dma_sync_sg_for_cpu(lro->pci_dev, &transfer->sgm->sgl[sgl_index], pages_to_sync, DMA_FROM_DEVICE) where sgl_index is where I figured the copy will start and pages_to_sync is how large the data is in number of pages.
With the above two changes my code now meets my throughput requirements.
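In code form, the partial sync from the second point looks roughly like this. It is a sketch, not the literal driver code: the struct names are approximate, it assumes one PAGE_SIZE page per scatterlist entry (as XDMA sets the ring up), and a range that wraps around the end of the ring would need two calls.

static void sync_ring_range_for_cpu(struct xdma_dev *lro,
                                    struct xdma_transfer *transfer,
                                    size_t ring_offset, size_t bytes)
{
    /* One scatterlist entry per page, so a byte offset maps to an sgl index. */
    size_t first_page = ring_offset / PAGE_SIZE;
    size_t last_page  = (ring_offset + bytes - 1) / PAGE_SIZE;
    int pages_to_sync = last_page - first_page + 1;

    /* Only sync the pages that copy_to_user() is about to read. */
    pci_dma_sync_sg_for_cpu(lro->pci_dev,
                            &transfer->sgm->sgl[first_page],
                            pages_to_sync, DMA_FROM_DEVICE);
}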
I think XDMA was originally written for x86, in which case the sync functions do nothing.
It does not seem likely that you can use the single sync variants unless you modify the circular buffer. Replacing the circular buffer with a list of buffers to send seems like a good idea to me. You pre-allocate a number of such buffers and have a list of buffers to send and a free list for your app to reuse.
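Something like the following is what I mean by a pre-allocated free list of buffers (purely illustrative names, not XDMA code, using coherent allocations so no per-read sync is needed):

#include <linux/dma-mapping.h>
#include <linux/list.h>
#include <linux/slab.h>

struct dma_buf_entry {
    struct list_head node;
    void *cpu_addr;            /* coherent kernel mapping   */
    dma_addr_t dma_addr;       /* address given to the FPGA */
    size_t size;
};

/* Pre-allocate 'count' buffers and park them on the free list. */
static int alloc_buf_pool(struct device *dev, struct list_head *free_list,
                          int count, size_t size)
{
    int i;

    for (i = 0; i < count; i++) {
        struct dma_buf_entry *e = kzalloc(sizeof(*e), GFP_KERNEL);

        if (!e)
            return -ENOMEM;
        e->cpu_addr = dma_alloc_coherent(dev, size, &e->dma_addr, GFP_KERNEL);
        if (!e->cpu_addr) {
            kfree(e);
            return -ENOMEM;
        }
        e->size = size;
        list_add_tail(&e->node, free_list);   /* ready for the app to claim */
    }
    return 0;
}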
If you're using a Zynq FPGA, you could connect the DMA engine to the ACP port so that FPGA memory access will be coherent. Alternatively, you can map the memory regions as uncached/buffered instead of cached.
Finally, in my FPGA applications, I map the control registers and buffers into the application process and only implement mmap() and poll() in the driver, to give apps more flexibility in how they do DMA. I generally implement my own DMA engines.
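A hedged sketch of the driver side of that approach (the device struct and names like mydev are made up, and the details depend on the hardware; on older kernels poll() returns unsigned int / POLLIN): the coherent buffer is mapped straight into the process, and poll() just waits on a flag set from the interrupt handler.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/poll.h>
#include <linux/wait.h>

struct mydev {
    struct device *dev;
    void *buf_cpu;             /* kernel virtual address of the DMA buffer */
    dma_addr_t buf_dma;        /* bus address programmed into the FPGA */
    size_t buf_size;
    wait_queue_head_t waitq;
    bool data_ready;           /* set from the IRQ handler */
};

static int my_mmap(struct file *f, struct vm_area_struct *vma)
{
    struct mydev *d = f->private_data;   /* set in open() */

    /* Map the coherent buffer straight into the calling process. */
    return dma_mmap_coherent(d->dev, vma, d->buf_cpu, d->buf_dma,
                             vma->vm_end - vma->vm_start);
}

static __poll_t my_poll(struct file *f, poll_table *wait)
{
    struct mydev *d = f->private_data;

    poll_wait(f, &d->waitq, wait);       /* the IRQ handler wakes this queue */
    return d->data_ready ? (EPOLLIN | EPOLLRDNORM) : 0;
}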
Pete, I am the original developer of the driver code (before the X of XDMA came into place).
The ringbuffer was always an unorthodox thing and indeed meant for cache-coherent systems, and it is disabled by default. Its initial purpose was to get rid of the DMA (re)start latency; even with full asynchronous I/O support (even with zero-latency descriptor chaining in some cases) we had use cases where this could not be guaranteed, and where a true hardware ringbuffer/cyclic/loop mode was required.
There is no equivalent to a ringbuffer API in Linux, so it's open-coded a bit.
I am happy to re-think the IP/driver design.
Can you share your fix?
I'm creating a web application running on a Linux server. The application is constantly accessing a 250K file - it loads it in memory, reads it and sends back some info to the user. Since this file is read all the time, my client is suggesting to use something like memcache to cache it to memory, presumably because it will make read operations faster.
However, I'm thinking that the Linux filesystem is probably already caching the file in memory since it's accessed frequently. Is that right? In your opinion, would memcache provide a real improvement? Or is it going to do the same thing that Linux is already doing?
I'm not really familiar with either Linux or memcache, so I would really appreciate it if someone could clarify this.
Yes, if you do not modify the file each time you open it.
Linux will keep the file's data in the page cache in memory, and "loading" the file into memory should be very fast (at worst, setting up the page-table mappings).
Edit: Though, as cdhowie points out, there is no 'linux filesystem'. However, I believe the relevant code is in Linux's memory management, and is therefore independent of the filesystem in question. If you're curious, you can read in the Linux source about the handling of vm_area_struct objects, mainly in linux/mm/mmap.c.
As people have mentioned, mmap is a good solution here.
But, one 250K file is very small. You might want to read it in at startup and keep it in some in-memory structure that matches what you want to send back to the user. E.g., if it is a text file, an array of lines might be a good choice.
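For instance, a minimal sketch (plain C; load_lines() is a hypothetical helper, not an existing API) of slurping the file into an array of lines once at startup:

#define _POSIX_C_SOURCE 200809L   /* for getline() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read every line of 'path' into a malloc'd array, done once at startup
 * so request handling never touches the disk. */
static char **load_lines(const char *path, size_t *nlines)
{
    FILE *f = fopen(path, "r");
    char **lines = NULL;
    char *line = NULL;
    size_t cap = 0, n = 0, len = 0;

    if (!f)
        return NULL;
    while (getline(&line, &len, f) != -1) {
        if (n == cap) {                        /* grow the pointer array */
            size_t newcap = cap ? cap * 2 : 64;
            char **tmp = realloc(lines, newcap * sizeof(*lines));
            if (!tmp)
                break;
            lines = tmp;
            cap = newcap;
        }
        lines[n++] = strdup(line);             /* keeps the trailing '\n' */
    }
    free(line);
    fclose(f);
    *nlines = n;
    return lines;
}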
The file should be cached, but make sure the noatime option is set on the mount; otherwise every access will trigger an access-time update and the associated metadata write.
Yes, definitely. It will keep accessed files in memory indefinitely, unless something else needs the memory.
You can control this behaviour (to some extent) with the fadvise system call. See its "man" page for more details.
A read/write system call will still normally need to copy the data, so if you see a real bottleneck doing this, consider using mmap(), which can avoid the copy by mapping the cached pages directly into the process.
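For example, a hedged user-space sketch (the file path is hypothetical) that asks the kernel to pre-populate the cache and then maps the file instead of read()ing it:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/var/data/info.dat";   /* hypothetical 250K file */
    int fd = open(path, O_RDONLY);
    struct stat st;

    if (fd < 0 || fstat(fd, &st) < 0) {
        perror(path);
        return 1;
    }
    /* Hint: start reading the whole file into the page cache now. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);

    /* Map the cached pages directly; no copy into a private buffer. */
    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* ... parse 'data' and build the response here ... */
    printf("mapped %lld bytes\n", (long long)st.st_size);

    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}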
I guess putting that file on a ramdisk (tmpfs) may give enough of an advantage without big modifications, unless you are really serious about response times at the microsecond level.
I was assigned to write a system call for the Linux kernel, which oddly determines (and reduces) users' maximum transfer amount per minute (for file operations). This system call will be called lim_fs_usage and will take a parameter for the maximum number of bytes all users can access in a minute. In short, I am going to limit the bandwidth of all filesystem operations in Linux. The project also asks for choosing an appropriate method for distributing this restricted resource (file access) among the users, but I think this won't be a big problem.
I did a long, long search and scan but could not find a method for managing file system access programmatically. I thought of mapping (mmap()) the hard drive to memory and managing memory operations, but this turned out to be useless. I also tried to find an API for the virtual file system in order to monitor and limit it, but I could not find one. Any ideas, please... Any help is greatly appreciated. Thank you in advance...
I wonder if you could do this as an IO scheduler implementation.
The main difficulty of doing IO bandwidth limitation under Linux is that, by the time an IO gets anywhere near the device, the kernel has probably long since forgotten who caused it.
Likewise, you can get on some very tricky ground in determining who is responsible for a given piece of IO:
If a binary is demand-loaded, who owns the IO doing that?
A mapped section of memory (a demand-loaded executable or otherwise) might be evicted because someone else used too much RAM; paging it back in then places an unfair burden on the quota of the user who owned those pages
IO operations can be combined, and might come from different users
A write operation might cause an IO sooner or later depending on how the kernel schedules it; a later schedule may mean that fewer IOs need to be done in the long run, as another write gets done to the same block in the interim; writing to an already dirty block in cache does not make it any dirtier.
If you understand all these and more caveats, and still want to, I imagine doing it as an IO scheduler is the way to go.
IO schedulers are pluggable under Linux (2.6) and can be changed dynamically - the kernel waits for all IO on the device (IO scheduler is switchable per block device) to end and then switches to the new one.
Since it's urgent, I'll give you an idea off the top of my head without doing any research on its feasibility: what about inserting a hook to monitor the system calls that deal with file system access?
You might end up writing specialised kernel modules to handle the various filesystems (ext3, ext4, etc) but as a proof-of-concept you can start with one. Do not forget that root has reserved blocks in memory, process space and disk for his own operations.
Managing memory operations does not sound related to what you're trying to do (but perhaps I am mistaken here).
After a long period of thinking and searching, I decided to use the "hooking" method proposed. I am thinking of creating a new system call which initializes and manages a global variable like hdd_bandwidth_limit. This variable will be used in the modified implementations of the read() and write() system calls (in place of the "count" variable). Then I will decide on the distribution of this resource, which is the real issue. Probably I will find out how many users are using the system at a given moment and divide the resource equally among them, a Round-Robin-like distribution. But still, I am open to suggestions on this distribution issue. Should it be SJF, FCFS, or Round-Robin? Synchronization is another issue. How can I know whether a user's job is short or long? Or whether they are done with the operation or not?
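To make the idea more concrete, here is a minimal sketch of the global per-minute byte budget that the modified read()/write() paths would consult. All names such as fs_budget_charge() are mine, not existing kernel symbols, and the per-user fairness policy would sit on top of this.

#include <linux/spinlock.h>
#include <linux/jiffies.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(fs_budget_lock);
static unsigned long fs_window_start;     /* jiffies at the start of the window */
static u64 fs_bytes_used;                 /* bytes consumed this window */
static u64 fs_bytes_limit = 100 << 20;    /* set by the new lim_fs_usage syscall */

/* Returns true if 'count' bytes may proceed now, false if over budget. */
static bool fs_budget_charge(size_t count)
{
    unsigned long flags;
    bool ok;

    spin_lock_irqsave(&fs_budget_lock, flags);
    if (time_after(jiffies, fs_window_start + 60 * HZ)) {
        fs_window_start = jiffies;        /* start a new one-minute window */
        fs_bytes_used = 0;
    }
    ok = fs_bytes_used + count <= fs_bytes_limit;
    if (ok)
        fs_bytes_used += count;           /* charge the caller */
    spin_unlock_irqrestore(&fs_budget_lock, flags);
    return ok;
}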
If I start copying a huge file tree from one position to another or if some other process starts doing lots of disk activity, the foreground app (GUI) slows way down. For example, take a 2gb file tree with 100k files in it. Open a console and do cp -r bigtree bigtree2. Then go to firefox and start browsing. Firefox is almost unusable. Even if I set firefox's nice level to really high priority (-20), it's still super slow with huge delays.
I remember some years ago when I worked on a Solaris box, the system behaved much better in similar circumstances.
My HD is using DMA, not PIO. It's SATA. Not mounted with the atime flag.
Linux has long had a problem with programs that hog all the system's "dirty" cache memory. What is happening is that the copy process is filling the write cache with the file data it is copying and it is doing it very quickly. So when Firefox comes along and needs to write it must first wait for dirty buffer space or an available disk queue write slot. While waiting it is competing with the copy process and the kernel's pdflush thread, which moves data from dirty buffers to the disk write queue.
Firefox has yet another problem in this scenario. It uses SQLite to store its bookmarks, history and other things. SQLite is an ACID-compliant database and it uses a transaction system, with its disk writes flushed to disk. So not only does it have to wait for buffer space, it must also wait for the disk queue, which is full of copied file data, to clear out before it can acknowledge a successful write.
There has been a lot of tweaking done to the Linux disk queuing and buffering system. There are changes in almost every kernel release. Try one of the newer releases. You can also try tweaking the sysctl values. I sort of like these:
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 9000
vm.dirty_background_ratio = 4
vm.dirty_ratio = 80
You can also try tweaking the number of slots in the disk queue. This value is in /sys/block/sda/queue/nr_requests. You need to substitute sda with whatever your drive really is. More slots means more chances to merge IO requests and the CFQ IO scheduler can do a better job with priorities. Fewer slots usually means a shorter wait to get written to disk for synchronous IO like SQLite's transactions. Fewer slots also means a shorter wait to get read IO into the disk queue if a write-heavy process completely stuffs the queue with write IO.
Try ionice-ing or nice-ing the copy process. The issue is that the copy's IO gets the same priority as the GUI's IO, which, for a desktop, hurts perceived responsiveness.
There's an Ubuntu brainstorm about this currently.
You're not the first to notice this problem. Former kernel developer Con Kolivas (http://en.wikipedia.org/wiki/Con_Kolivas) found that a lot of companies are paying to improve Linux server performance at the expense of desktop performance. Con had an impressive set of patches for making the desktop more responsive. Unfortunately there was some sort of code war and eventually Con dropped out.
I would love to know how to petition the Linux kernel developers for better desktop performance. In the meantime, if you are willing to run kernel 2.6.22, you can run with the -ck patch set.
Make sure that DMA is enabled on all of your drives that support it. Depending on your distribution, this may not be the default. Read man hdparm, and look into your system's init mechanism.
Under Linux, how can I tell what specific process owns / is using a given address in physical memory?
I understand that this may require writing a kernel module to access some kernel data structure and return the results to a user - I need to know how it can be done, regardless of how complicated it is.
The pages in use by a process and their location in physical memory are not static pieces of information. However, the information you seek should be in the page tables. A change went into the kernel that might be almost exactly what you're looking for:
author Arjan van de Ven <arjan@linux.intel.com> 2008-04-17 15:40:45 (GMT)
committer Ingo Molnar <mingo@elte.hu> 2008-04-17 15:40:45 (GMT)
commit 926e5392ba8a388ae32ca0d2714cc2c73945c609 (patch)
tree 2718b50b8b66a3614f47d3246b080ee8511b299e
parent 2596e0fae094be9354b29ddb17e6326a18012e8c (diff)
x86: add code to dump the (kernel) page tables for visual inspection by kernel developers
This patch adds code to the kernel to have an (optional)
/proc/kernel_page_tables debug file that basically dumps the kernel
pagetables; this allows us kernel developers to verify that nothing
fishy is going on and that the various mappings are set up correctly.
This was quite useful in finding various change_page_attr() bugs, and
is very likely to be useful in the future as well.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: mingo@elte.hu
Cc: tglx@tglx.de
Cc: hpa@zytor.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The added functionality is enabled by a new config option (X86_PTDUMP).
You might want to start here for a discussion of how process virtual memory is mapped to physical memory. That would give you a good place to start as far as figuring out where you would need to hook into the kernel to access the page tables, etc., where that information is stored.
Well, due to the way things are done under Linux, a process may own memory at one instant and then not anymore, due to paging.
http://en.wikipedia.org/wiki/Paging
Essentially this means that the computer switches out data it doesn't need at one moment so that the memory can be used for something else.
I'm not sure if this helped or not, but I'd advise you to look at page tables and directories, since you can use these to translate to physical addresses.
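If it helps, here is a hedged user-space sketch of that translation in the forward direction, using /proc/<pid>/pagemap (available since kernel 2.6.25; reading PFNs requires root on recent kernels). To answer the original question you would run this over the mapped pages of each process and compare the resulting frame with the physical address you care about.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <hex-virtual-address>\n", argv[0]);
        return 1;
    }
    unsigned long vaddr = strtoul(argv[2], NULL, 16);
    long page_size = sysconf(_SC_PAGESIZE);
    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/pagemap", argv[1]);

    FILE *f = fopen(path, "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    /* pagemap holds one 64-bit entry per virtual page of the process. */
    uint64_t entry;
    if (fseek(f, (vaddr / page_size) * sizeof(entry), SEEK_SET) != 0 ||
        fread(&entry, sizeof(entry), 1, f) != 1) {
        perror("pagemap read");
        return 1;
    }
    fclose(f);

    if (entry & (1ULL << 63)) {                     /* bit 63: page present */
        uint64_t pfn = entry & ((1ULL << 55) - 1);  /* bits 0-54: frame number */
        printf("physical address: 0x%llx\n",
               (unsigned long long)(pfn * page_size + vaddr % page_size));
    } else {
        printf("page not present in RAM (swapped out or never faulted in)\n");
    }
    return 0;
}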
You might be able to use pmap -d [pid] for this... unfortunately you'd have to run it on all processes to see which one returned a result for the given memory address. Certainly not as efficient as a kernel module (and you might not even get a result, if the memory is paged out while you're looking for it).