linux kernel output in a file - linux

I am running a program from the Linux terminal (in this case a predictor for protein localization). The output/results are printed to the terminal, one after another. However, when I want to copy them into a simple text file, the terminal's scrollback is not "long" enough to hold all the results.
Instead of splitting the work into smaller separate files, I would like to print the output to a file directly. I tried to google this, but it doesn't really help me further.
1. dmesg seems to be for system/kernel log output?
2. /var/log/syslog doesn't show the stuff I need, only other technical kernel messages.
3. I saw something about printf(), but didn't quite understand the mechanics or whether it was usable for my problem.
Could someone explain how to do this, or where to look for the right info?

I think I found out how to do it: by adding > fileToBeNamed.txt at the end of the command. Sorry :(

Related

How can I change Linux terminal to make it look like the command "top"?

My aim is to make a program using C or C++ that prints to the Linux console in a way similar to "top", in the sense that top's content updates and overwrites the existing text in the console, rather than printing new lines. How? I only want to know which syscalls or functions make this possible. A little example would be very much appreciated.
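As far as I know, top itself sits on top of terminal-control libraries (ncurses/terminfo), and that is the robust route. The simplest way to get a similar effect by hand is to emit ANSI escape sequences that move the cursor home and clear the screen before reprinting. A minimal sketch in plain C, with no external libraries:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        for (int i = 0; i < 10; i++) {
            /* "\033[H" moves the cursor to the top-left corner,
             * "\033[J" erases from the cursor to the end of the screen. */
            printf("\033[H\033[J");
            printf("refresh #%d\n", i);
            printf("some value: %d\n", i * i);
            fflush(stdout);   /* push the new "frame" out immediately */
            sleep(1);
        }
        return 0;
    }

ncurses does essentially the same job, but portably (it looks up the right escape sequences for your terminal via terminfo) and with far more control over the screen.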

How are low level device drivers written for Linux?

I remember reading some books about Linux device drivers around the end of my university education in Comp. Science. Soon thereafter I got a job and haven't really worked much with Linux/embedded (I do mostly Java stuff now). However, it's something I want to look into.
Anyway, I recall reading an online article (I'll edit the post if I can find it) about writing a USB driver for Linux for a little "USB Missile Turret" similar to this:
http://www.thinkgeek.com/geektoys/warfare/8a0f/
It went into detail about how to write the driver without one being provided (the guy just found a generic turret on eBay and figured out how to write the driver just by looking at the components and such). Anyway, it was pretty amazing.
I have a pretty good clue about how low-level embedded stuff works, but that's for things like AVR/PIC microcontrollers; I have no idea how something like this would be written for a normal processor in a PC.
I guess what I'm asking is: how do you figure out this kind of stuff, and where would I find such information?
Edit: found the link
http://matthias.vallentin.net/blog/2007/04/writing-a-linux-kernel-driver-for-an-unknown-usb-device/
(It's way more confusing than I thought; I didn't realize he reverse engineered a Windows USB driver. I'm guessing it'd be impossible to figure out without snooping through a Windows driver?)
The Linux kernel and its drivers are GPL source code. You can read the code, change it, compile it, and experiment with it to your heart's delight. That is a pretty good way to learn.
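For instance, a classic first experiment is a minimal "hello world" module built out-of-tree against your running kernel's headers (the names here are placeholders):

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example module");

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;                     /* 0 means success */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Build it with the usual one-line obj-m Makefile, load it with insmod, remove it with rmmod, and watch the printk output show up in dmesg.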

How can I read the VESA/VideoBIOS "Mode Removal Table"?

Many sites and articles on getting widescreen monitors to work on notebooks in their native resolution mention something called the "Mode Removal Table" in the Video BIOS which specifically prevents certain video modes:
http://www.avsforum.com/avs-vb/showthread.php?t=947830
http://software.intel.com/en-us/forums/showthread.php?t=61326
http://forum.notebookreview.com/dell-xps-studio-xps/313573-xps-m1330-hdmi-hdmi-tv-issue-2.html
http://forums.entechtaiwan.com/index.php?action=printpage;topic=3363.0
Does such a thing really exist? The fix worked for me but I wanted to find out if I can read, modify, or work around this table. However I can't find any mention of it in the various VESA standards. Perhaps it actually goes by some other more cryptic name?
“Many sites and articles”? The first couple of dozen results are from you, and most of the rest are from that Intel article you mentioned or other people linking to that article.
You could always try asking someone who talks as though they know how to do it. There's another thread that discusses it—though it too has no information on the table, only a quick mention of it.
There does not seem to be any currently known way to read the GMA video BIOS. You would have to dump the BIOS and reverse-engineer it to figure out where the table is and how to interpret it. Unfortunately, even extracting it is difficult, since nobody seems to have had enough interest to create a tool that automates it. Looks like you've got even more reversing to do. (Technically, because the GMA is an integrated graphics adapter, you'll need to extract the video BIOS from the system BIOS, then extract the table.)
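If you do want to attempt the dump yourself, the legacy video BIOS image is conventionally shadowed at physical address 0xC0000. A rough sketch of reading it through /dev/mem is below; it needs root, assumes the shadow copy is actually present and readable on your machine, and may be blocked on kernels built with CONFIG_STRICT_DEVMEM:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define VBIOS_BASE 0xC0000   /* conventional legacy video BIOS location */
    #define VBIOS_SIZE 0x10000   /* dump 64 KiB; the real size (in 512-byte units) is in byte 2 of the image */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        unsigned char *rom = mmap(NULL, VBIOS_SIZE, PROT_READ, MAP_SHARED, fd, VBIOS_BASE);
        if (rom == MAP_FAILED) { perror("mmap"); return 1; }

        /* A valid option ROM starts with the 0x55 0xAA signature. */
        if (rom[0] != 0x55 || rom[1] != 0xAA)
            fprintf(stderr, "warning: no option ROM signature at 0xC0000\n");

        FILE *out = fopen("vbios.bin", "wb");
        if (!out) { perror("fopen vbios.bin"); return 1; }
        fwrite(rom, 1, VBIOS_SIZE, out);
        fclose(out);

        munmap(rom, VBIOS_SIZE);
        close(fd);
        return 0;
    }

From there the reverse-engineering (finding the mode tables inside vbios.bin) is still entirely up to you.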

New to Linux Kernel/Driver development

Recently, I began developing a driver for an embedded device running Linux.
Until now I have only read about Linux internals.
Having no prior experience in driver development, I am finding it a tad difficult to take my first step.
I have downloaded the kernel source code (v2.6.32).
I have read (skimmed) Linux Device Drivers (3e).
I read a few related posts here on StackOverflow.
I understand that Linux has a "monolithic" approach.
I have built the kernel (included an existing driver via menuconfig, etc.).
I know the basics of kconfig and makefile files so that should not be a problem.
Can someone describe the structure (i.e. the inter-links) of the various directories in the kernel source code? In other words, given a source file, which other files would it refer to for related code? (The "#include"s provide only a partial idea.) Could someone please help me get a better picture?
Any help will be greatly appreciated. Thank you.
Given a C file, you have to look at the functions it calls and data structures it uses, rather than worrying about particular files.
There are two basic routes to developing your own device driver:
Take a driver that is similar to yours; strip out the code that isn't applicable to your device, and fill in new code for your device.
Start with the very basic pieces of a device driver, and add pieces a little at a time until your device begins to function.
The files that compose your driver will make more sense as you complete this process. Do consider what belongs in each file, but to some extent, dividing a driver among files is more an art than a science. Smaller drivers often fit into just one or two files.
A bit of design may also be good. Consider what your device does and what your driver will need to do. Based on that, you should be able to map out what functions the device driver will need to have.
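As a concrete illustration of the second route, a bare-bones character-device skeleton might look something like the sketch below (the "mydev" name is a placeholder, and error handling is kept to a minimum):

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/fs.h>

    MODULE_LICENSE("GPL");

    static int major;

    static int mydev_open(struct inode *inode, struct file *filp)
    {
        return 0;     /* nothing to set up yet */
    }

    static ssize_t mydev_read(struct file *filp, char __user *buf,
                              size_t count, loff_t *ppos)
    {
        return 0;     /* report EOF until the device does something real */
    }

    static const struct file_operations mydev_fops = {
        .owner = THIS_MODULE,
        .open  = mydev_open,
        .read  = mydev_read,
    };

    static int __init mydev_init(void)
    {
        /* passing 0 asks the kernel to pick a free major number */
        major = register_chrdev(0, "mydev", &mydev_fops);
        if (major < 0)
            return major;
        printk(KERN_INFO "mydev: registered with major %d\n", major);
        return 0;
    }

    static void __exit mydev_exit(void)
    {
        unregister_chrdev(major, "mydev");
    }

    module_init(mydev_init);
    module_exit(mydev_exit);

Each new capability (ioctl handling, interrupts, access to the actual hardware) then gets added to this skeleton one piece at a time, which keeps the driver understandable while it grows.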
I also believe Linux Device Drivers, Third Edition may help you get on your way to driver development.
Kernel source files include other files based on what they do, what layer they are in, and which layer of the call stack they access. The big picture really informs how each file is related to the next.
I had to fix a kernel driver once. My biggest tip (if you use vim) is to set it up with ctags (the kernel source tree has a "make tags" target) so you can jump around the kernel source with Ctrl-] every time you see a function you don't understand.

Framebuffer Documentation

Is there any documentation on how to write software that uses the framebuffer device in Linux? I've seen a couple of simple examples that basically say: "open it, mmap it, write pixels to the mapped area." But there is no comprehensive documentation on how to use the different ioctls or anything like that. I've seen references to "panning" and other capabilities, but "googling it" gives way too many hits of useless information.
Edit:
Is the code the only documentation from a programming standpoint, as opposed to "how to configure your system to use the fb" user documentation?
You could have a look at the source code of fbi, an image viewer which uses the Linux framebuffer. You can get it here: http://linux.bytesex.org/fbida/
-- It appears there might not be too many options for programming the fb from user space on a desktop beyond what you mentioned. This might be one reason why some of the docs are so old. Look at this HOWTO for device driver writers, which is referenced from some official Linux docs: www.linux-fbdev.org/HOWTO/index.html . It does not reference too many interfaces, although looking at the Linux source tree does offer larger code examples.
-- opentom.org/Hardware_Framebuffer is not for a desktop environment. It reinforces the main methodology, but it does seem to avoid explaining all the ingredients necessary for the "fast" double-buffer switching it mentions. Another one, for a different device, which leaves some key buffering details out, is wiki.gp2x.org/wiki/Writing_to_the_framebuffer_device, although it does at least suggest that you might be able to use fb1 and fb0 to get double buffering (on that device; on a desktop, fb1 may not exist or may access different hardware), that using the volatile keyword might be appropriate, and that we should pay attention to vsync.
-- asm.sourceforge.net/articles/fb.html has assembly-language routines that also appear (?) to just do the basics: querying, opening, setting a few parameters, mmap-ing, drawing pixel values to storage, and copying over to the fb memory (making sure to use a short stosb loop, I suppose, rather than some longer approach).
-- Beware of 16 bpp comments when googling about the Linux frame buffer: I used fbgrab and fb2png during an X session to no avail. Each rendered an image that looked like a snapshot of my desktop taken with a very bad camera, underwater, and then overexposed in a dark room: the colors and size were completely broken, and much detail was missing (dotted all over with pixel colors that didn't belong). /proc and /sys on the computer I used (a recent kernel with at most minor modifications, from a PCLOS derivative) claim that fb0 uses 16 bpp, and most things I googled said something along those lines, but experiments led me to a very different conclusion. Besides the failures of those two standard frame-buffer grab utilities (at least the versions shipped by this distro), which may have assumed 16 bits, I had a successful test treating the frame-buffer pixel data as 32 bits. I created a file from data pulled in via cat /dev/fb0; its size ended up being 1920000 bytes. I then wrote a small C program to try to manipulate that data (under the assumption that it was pixel data in some encoding or other). I nailed it eventually, and the pixel format matched exactly what X reported when queried (TrueColor RGB, 8 bits per channel, no alpha but padded to 32 bits). Notice another clue: my screen resolution of 800x600 times 4 bytes gives exactly 1920000. The 16-bit approaches I tried initially all produced a broken image similar to fbgrab's, so it's not as if I was looking at the wrong data. [Let me know if you want the code I used to test the data. Basically I read in the entire fb0 dump and then spat it back out to a file, after adding a header "P6\n800 600\n255\n" to create a suitable ppm file, while looping over all the pixels and manipulating their order or expanding them. The end result for me was to drop every 4th byte and swap the first and third bytes in every 4-byte unit; in short, I turned the apparent BGRA fb0 dump into an RGB ppm file. ppm files can be viewed with many picture viewers on Linux. A rough sketch of that conversion is included after this list.]
-- You may want to reconsider your reasons for wanting to program using fb0 directly. You may not achieve any worthwhile performance gains over X (this was my experience, limited as it is) while giving up the benefits of using X. This might also account for why so few code examples exist.
-- Note that DirectFB is not fb. DirectFB has lately gotten more love than the older fb, as it is more focused on the sexier 3D hardware acceleration. If you want to render to a desktop screen as fast as possible without leveraging 3D (or even 2D) hardware acceleration, then fb might be fine, but it won't give you much that X doesn't. X apparently uses fb, and the overhead is likely negligible compared to the other costs your program will have (don't call X in a tight loop; instead call it at the end, once you have set up all the pixels for the frame). On the other hand, it can be neat to play around with fb, as covered here: Paint Pixels to Screen via Linux FrameBuffer
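As offered above, a rough sketch of that dump-to-ppm conversion (hypothetical file names; it assumes an 800x600, 32 bpp BGRA dump taken with cat /dev/fb0 > fb.raw):

    #include <stdio.h>

    #define W 800
    #define H 600

    int main(void)
    {
        FILE *in  = fopen("fb.raw", "rb");   /* raw dump of /dev/fb0 */
        FILE *out = fopen("fb.ppm", "wb");
        if (!in || !out) { perror("fopen"); return 1; }

        /* P6 = binary RGB ppm, 8 bits per channel */
        fprintf(out, "P6\n%d %d\n255\n", W, H);

        unsigned char px[4];
        for (int i = 0; i < W * H && fread(px, 1, 4, in) == 4; i++) {
            /* fb0 stores B, G, R, X; ppm wants R, G, B: swap and drop the 4th byte */
            unsigned char rgb[3] = { px[2], px[1], px[0] };
            fwrite(rgb, 1, 3, out);
        }

        fclose(in);
        fclose(out);
        return 0;
    }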
Check the MPlayer sources.
Under the /libvo directory there are a lot of video output plugins used by MPlayer to display multimedia. There you can find the fbdev plugin (the vo_fbdev* sources), which uses the Linux frame buffer.
There are a lot of ioctl calls, with the following codes:
FBIOGET_VSCREENINFO
FBIOPUT_VSCREENINFO
FBIOGET_FSCREENINFO
FBIOGETCMAP
FBIOPUTCMAP
FBIOPAN_DISPLAY
It's not exactly good documentation, but it is surely a good reference implementation.
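To see how those calls usually fit together, here is a minimal sketch of the open/ioctl/mmap sequence. It assumes a 32 bpp mode; a more careful program would also honour the red/green/blue field offsets reported in the variable screen info:

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/fb.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
            perror("ioctl");
            return 1;
        }
        printf("%ux%u, %u bpp, line length %u\n",
               var.xres, var.yres, var.bits_per_pixel, fix.line_length);

        unsigned char *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        /* Paint a white 100x100 square in the top-left corner (assumes 32 bpp). */
        for (unsigned y = 0; y < 100 && y < var.yres; y++)
            for (unsigned x = 0; x < 100 && x < var.xres; x++)
                memset(fb + y * fix.line_length + x * 4, 0xff, 4);

        munmap(fb, fix.smem_len);
        close(fd);
        return 0;
    }

FBIOPAN_DISPLAY and the colormap ioctls come into play once you start double buffering or using palettized modes; the vo_fbdev* sources are a good place to see them exercised.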
Look at the source code of any of: fbxat, fbida, fbterm, fbtv, the DirectFB library, libxineliboutput-fbe, ppmtofb, xserver-fbdev. All of these are Debian packages; just apt-get source them from the Debian repositories. There are many others...
Hint: search for "framebuffer" in the package descriptions using your favorite package manager.
OK, even if reading the code is sometimes called "guru documentation", actually doing it can be a bit much.
The source to any splash screen (i.e. during booting) should give you a good start.

Resources