Linux device driver development: what does it look like in 2012? [closed] - linux

Closed 10 years ago.
I am running Ubuntu 12.04 and I assume that all the items I see listed when I run ls /dev are actually the device drivers for all the devices/hardware components connected to (or able to connect to) my machine. Is this correct? If not, where does Linux store all the device drivers?
What are drivers written in, C? C++? Assembler? What modern IDE/tech stack do device driver developers use?

No, you are not correct. /dev is a folder full of special device files, which are interfaces to device drivers. So when I do something to /dev/sda, I am not working with the file of the SATA driver, but rather with an interface to whatever SATA driver happens to be loaded. Device files are how drivers expose their devices to userspace (along with the system calls that invoke drivers).
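To make the distinction concrete, here is a minimal sketch of userspace going through a device file: the read() is serviced by whichever block/SATA driver is currently bound to /dev/sda (it typically needs root to run).

/* Minimal sketch: userspace talking to a driver through its device file.
 * Opening /dev/sda does not touch the driver's code on disk; the open()
 * and read() system calls are routed to whatever block/SATA driver is
 * currently bound to that device.  Usually needs root. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char sector[512];
    int fd = open("/dev/sda", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/sda");
        return 1;
    }
    ssize_t n = read(fd, sector, sizeof(sector));  /* first sector of the disk */
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes via the block driver\n", n);
    close(fd);
    return 0;
}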
Drivers built as loadable modules are usually stored under /lib/modules.
Drivers are written in C, unless you want to triple your workload and write them in assembly. There isn't a single line of C++ in the entire Linux kernel, for technical and political reasons (Linus Torvalds hates the sight of it).
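For a taste of what that C looks like, here is the classic minimal module skeleton, as a rough sketch (build details vary by kernel version); it is compiled against the kernel headers and loaded with insmod:

/* Sketch of about the smallest possible loadable kernel module: plain C
 * against the kernel's own headers (no libc, no C++).  Built out of tree
 * with the kernel's build system, then loaded/unloaded with insmod/rmmod. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

static int __init example_init(void)
{
    printk(KERN_INFO "example: loaded\n");
    return 0;                      /* 0 = success, module stays loaded */
}

static void __exit example_exit(void)
{
    printk(KERN_INFO "example: unloaded\n");
}

module_init(example_init);
module_exit(example_exit);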
IDE? I doubt any kernel developers use IDEs. Most of them just use Vim or Emacs, then Git to commit to the kernel source, GDB/KDB for debugging, and whatever other command-line tools are needed (e.g. diff).

Related

How and how hard would it be to create DirectX vendor drivers for Linux? [closed]

Closed 9 years ago.
As seen in this thread, it seems that the missing piece needed to run DirectX natively on Linux is vendor drivers.
What exactly are vendor drivers? Are they drivers that interface with a specific model of a component, with a family, or with any of them? What are they written in? ASM and C, most likely?
How would someone (or a team) create these drivers for Linux? How would they be integrated into Linux? Would games, or applications in general, made for Windows and using DirectX need any tweaking for Linux? Would companies making games build their games for Linux knowing they could be used with no, or only a few, tweaks needed?
How hard would it be to make these drivers? How long would it take? Would it require any specific knowledge?
I know this makes for a lot of questions, but I'm very curious about this, and about why no big group has ever worked on it seriously (even though there must be a good reason).
Thanks a lot in advance for your answers!
EDIT: This is by no means an attempt to start a debate about, for example, OpenGL vs. DirectX, or Windows vs. Linux. Reading the FAQ, I can't really see why this thread isn't constructive, as it asks fairly well-aimed questions which should be answerable quickly.
IMHO, the main reason no one really bothers with DirectX is that there is already a graphics library available (Mesa, in the specific case of Linux) that supports essentially every graphics operation DirectX offers.
In contrast to DirectX, which is a specification based on so-called intellectual property owned by a single corporation, the API used by this library, called OpenGL, is an open standard agreed upon by a consortium of hardware manufacturers.
Unlike the philosophy of constraining its use to just one operating system, possibly trying to shackle its users to that one and only platform, OpenGL was intended as a platform-independent API right from the beginning.
Following this principle, and in contrast to DirectX being available on just one single platform, OpenGL is available on practically any computing platform, ranging from Android-based systems, Mac, and numerous other UNIXoid systems including Linux, even to Windows machines.
Using any API other than OpenGL would break this platform independence, which probably wouldn't be received as progress but rather as a regression.
To sum it up, the main reasons to favor OpenGL over DirectX are probably the following:
OpenGL is an open standard, while DirectX is proprietary
OpenGL is available on practically any platform, while DirectX is only available on a single one
any operation supported by DirectX is supported by OpenGL as well
if they are really needed, DirectX calls can be provided by a wrapper library that pushes operations down to OpenGL, as is done in WINE, for example
Mere availability of a DirectX library implementation alone wouldn't enable binary code designed for the Windows platform to run at all, as the whole set of system libraries and infrastructure still would not be available. As a matter of fact, even the binary formats in use differ: PE/COFF on Windows versus ELF on Linux.
An effort to supply a whole compatibility layer, including the needed system libraries, is already under way. As mentioned above, it goes by the name of WINE (see: http://www.winehq.org/).
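Purely as an illustrative sketch (the wrapper function name here is made up, and this is not WINE's actual code), the idea of such a layer is to translate each Direct3D-style request into the equivalent OpenGL calls:

/* Hypothetical sketch of a compatibility-layer routine: a Direct3D-style
 * "clear the back buffer" request is translated into plain OpenGL calls.
 * The wrapper_* name is illustrative, not part of WINE's real API, and a
 * current OpenGL context is assumed to exist. */
#include <GL/gl.h>

void wrapper_clear_backbuffer(float r, float g, float b, float a)
{
    glClearColor(r, g, b, a);                             /* colour to clear with */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   /* clear colour + depth */
}

WINE's real Direct3D-to-OpenGL layer is of course vastly more involved, but the principle is the same.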
I hope I gave you some good reasons why no one has ever tried (or will try), as you asked.

Want to learn Linux porting on an ARM platform [closed]

Closed 10 years ago.
I am looking to learn how to port various flavors of Linux to ARM boards. I was about to buy TI's PandaBoard or BeagleBoard. I want to learn how to customize the Linux source code, compile it, and port it to one of these boards.
I was just curious whether there are any other boards with better community support than the TI ones that would be good for beginners.
Some of the other options I could find on the Internet are:
Snapdragon 8x60 mobile platform with Android
i.MX31 product development kit (very expensive)
Tegra 250 development board
First, building a kernel by yourself is really hard work, and building an embedded kernel is much more difficult. Maybe you can start by playing with some prebuilt kernel images, and then try to configure one on your own.
I have a BeagleBoard, and at first I used these Ubuntu ARM ports; the third link has the kernel image (you can add USB and Wi-Fi support really easily):
Install Ubuntu-ARM on the BeagleBoard
Ubuntu ARM ports
Ubuntu ARM kernel images
Or maybe, if you like Debian, here is some information about the ARM port installation:
Install embedded Debian on the BeagleBoard
ELinux BeagleBoard, embedded Debian information
Or, if you know how to configure and build a kernel, or maybe if you have a little Gentoo experience, you can try this:
Gentoo manual and kernel image on the BeagleBoard
Gentoo cross-development page information
Gentoo ARM handbook
Gentoo ARM port overlay (Git)
And if you have a PandaBoard, this guy has a lot of documentation on it:
Gentoo PandaBoard install howto
Gentoo ARM files and information
Gentoo PandaBoard files and information
Check the BeagleBoard wiki page and the eLinux page; they have a lot of documentation about the board, NAND configuration, Linux distributions, devices, etc.
BeagleBoard page
eLinux BeagleBoard information
Or you can play with QEMU and configure an ARM virtual machine.

Cygwin as native 64-bit in the future? [closed]

Closed 10 years ago.
Does anyone know if there will ever be a true 64-bit version of Cygwin? The FAQ says "as far as we know nobody is working on a 64-bit version" or something like that. Is Cygwin forever to be a 32-bit application (or family of apps, if you prefer)?
A 64-bit version would be nice. For the most part I can do what I need with the 32-bit version of Cygwin on 64-bit Windows. But every now and then a 64-bit program I launch from Cygwin will recognize the fact that it was launched by a 32-bit parent and behave incorrectly, or not run at all. I must open a cmd.exe or PowerShell session to run these few commands. One example you can reproduce for yourself on 64-bit Windows 2003 with IIS installed is to run the following command from Cygwin, then from a cmd.exe that was not opened from within Cygwin. (Double backslashes obviously aren't necessary in cmd.exe, but they work OK in both shells.)
cscript c:\\windows\\system32\\iisApp.vbs
So, I can live with opening a cmd.exe session when I need to run something that behaves this way. But being a huge fan of Cygwin I would really like to see an indication that someday someone will produce a 64-bit version.
Probably coincidence, but shortly after this question was posted, there was a large thread with the Cygwin developers discussing 64-bit here:
http://thread.gmane.org/gmane.os.cygwin.devel/233/focus=247
TL;DR - They are in fact thinking about 64-bit Cygwin, but the porting issues are complex...
You'll need to see a clairvoyant to get a definitive answer to your question, but here goes anyway.
A 64-bit Cygwin is certainly possible, but it would require a lot of work. That involves not only adapting the Cygwin DLL, which probably contains many 32-bit assumptions, but also the porting of all the packages in the distro. My guess is that this will happen when 64-bit Windows becomes so widespread that developing the 32-bit version is no longer worthwhile, so as to avoid splitting the Cygwin project's rather limited resources.
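To give one hedged example of the kind of 32-bit assumption meant here: 64-bit Windows uses the LLP64 model, where long stays 32 bits wide while pointers grow to 64 bits, so code that stores pointers in long (harmless on 32-bit, and on LP64 Linux) can silently truncate them.

/* Illustrative only: a classic 32-bit assumption a 64-bit port has to
 * hunt down.  Under LLP64 (64-bit Windows), long is still 32 bits while
 * pointers are 64 bits, so stashing a pointer in a long may drop bits. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int value = 42;
    int *p = &value;

    long as_long = (long)(intptr_t)p;   /* risky: only safe if long can hold a pointer */
    intptr_t as_intptr = (intptr_t)p;   /* safe: intptr_t is always wide enough */

    printf("sizeof(long) = %u, sizeof(void *) = %u\n",
           (unsigned)sizeof(long), (unsigned)sizeof(void *));
    printf("long copy %s the full pointer\n",
           (intptr_t)as_long == as_intptr ? "preserves" : "truncates");
    return 0;
}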

What OSes can I use if I want to use an Intel Atom-based board as an embedded system? [closed]

Closed 10 years ago.
I'm planning to use an Intel Atom on a board for an embedded system. The embedded system will be running programs written in C for image processing. Since it's an embedded system, footprint is obviously a concern. I was thinking about using a modified version of the Linux kernel. Any other options?
I've written my own OS for embedded systems, so I'm not too sure. But one project I've been wanting to try is uClinux, though that might not be enough for what you want to do. If you have more resources you might want Puppy Linux or Damn Small Linux. They all should have a C compiler, which will suit your needs.
Hope this helps!
P.S. Since I'm a new user, I can only post one hyperlink; you'll have to Google the other two, sorry!
I don't know how much memory you have, but Windows CE might be another choice. Going this route lets you stay with Windows tools (if you like those). There is also a micro edition of the .NET Framework available for use on Windows CE.
It depends what services you need from your OS. The smallest footprint will be achieved by using a simple RTOS kernel such as uC/OS-II or FreeRTOS; however, support for devices, filesystems, etc. will be entirely down to you or third-party libraries, with the associated integration issues. Also, the simpler kernels do not utilise the MMU to provide protection between tasks and the kernel; typically everything runs as a single multithreaded application.
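To make "a single multithreaded application" concrete, here is a rough FreeRTOS-style sketch (task names, priorities, and stack sizes are just placeholders): the scheduler and every task are linked into one image sharing one flat address space, with no MMU-enforced separation between them.

/* FreeRTOS-style sketch (names and stack sizes are illustrative).
 * Everything below is linked into a single image: the scheduler and
 * both "tasks" share one flat address space, with no MMU protection
 * between them or from the kernel. */
#include "FreeRTOS.h"
#include "task.h"

static void vCaptureTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* grab a frame from the sensor ... */
        vTaskDelay(10);            /* wait a few ticks for the next frame */
    }
}

static void vProcessTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* run the image-processing pipeline on the latest frame ... */
        vTaskDelay(10);
    }
}

int main(void)
{
    xTaskCreate(vCaptureTask, "capture", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    xTaskCreate(vProcessTask, "process", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();         /* hands control to the RTOS; never returns on success */
    for (;;) { }                   /* only reached if the scheduler failed to start */
}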
Broader and more comprehensive hardware support can be provided by 'heavyweights' such as Linux or Windows Embedded.
A middle ground can probably be achieved with a more fully featured RTOS such as eCos, VxWorks, Nucleus, or QNX Neutrino. QNX is especially strong on MMU support.
"Image processing" in an embedded box almost always means real-time image processing. Your number one concerns are going to be maximizing data throughput and minimizing latency processing overhead.
My personal prejudice, from having done real-time image processing (staring focal plane array FLIR nonuniformity compensation and target tracking) for a living, is that using an Intel x86-ANYTHING for real-time embedded image processing is a horrible mistake.
However, assuming that your employer has crammed that board down your throat, and you aren't willing to quit over their insistence on screwing up, my first recommendation would be QNX, and my second choice would be VxWorks. I might consider uCOS.
Because of the low-overhead, low-latency requirements inherent in moving massive numbers of pixels through a system, I would not consider ANYTHING from Microsoft, and I would put any Linux at a distant third or fourth place, behind QNX, VxWorks, and uCOS.
If you are needing to do real-time image processing, then you will likely want to use a Real-Time Operating System. If that is the route you want to take, I would recommend trying out QNX. I (personally) find that QNX has a nice balance of available features and low overhead. I have not used VxWorks personally, but I have heard some good things about it as well.
If you do not need Real-Time capabilities, then I would suggest starting with a Linux platform. You will have much better luck stripping it down to meet your hardware limitations than you would a Windows OS.
The biggest factor you should consider is not your CPU, but the rest of the hardware on your board. You will want to make sure that whatever OS you choose has drivers available for all of your hardware (unless you are planning on writing your own drivers), and embedded boards can often have uncommon or specialized chipsets that don't yet have open-source drivers available. Driver availability alone might make your decision for you.

A Development Machine in VirtualBox - (Debian-min vs ArchLinux vs recommend-one) [closed]

Closed 10 years ago.
I have a few years of experience on Linux, mainly Ubuntu (dual-boot). Now I am shifting to Windows and installing Linux in VirtualBox (PUEL). I am looking for a lightweight distro for a development machine setup. I thought of using a Debian unstable minimal install, then adding build-essential, Openbox (or a slightly more featureful light WM; please recommend one), ssh-server, Ethereal, iptables, nmap (maybe), Vim, and Python 3. That is about all I can think of for now.
Options I can think of --
Debian unstable minimal, and then using apt-get for the rest. Is there also a recommended lightweight version of Ubuntu? I read that U-lite is not good, and some others are also not that good.
Arch Linux; I have been reading a great deal about it. Wikipedia says it is mainly a binary-based distro, but everyone on the net/in the community only talks about its source-based approach. If it is binary, I think I can have a quick setup. (For those running Arch Linux as a guest OS in VirtualBox:) are the Guest Additions working fine in Arch Linux?
FreeBSD 8: is a minimal install possible, and is it recommended?
Recommendations for any other i686-optimized Linux, or let's say i386 is also fine, as I will only use it for coding.
For system admins:
I would like to know whether Arch Linux has the potential to make its way into companies for production systems and replace Red Hat/Debian/BSD on servers hosting apps/portals.
Addition: just a thought: is there any distro that helps you become a better programmer/developer/analyst, in terms of the way things should be done? I don't know if I am over-generalizing it :).
Have you checked CrunchBang? If you are not particular about needing the power of apt-get, you can also check out Zenwalk or Vector.
My work machine is a 3 GHz, 4 GB Windows 7 box, on which I am running a 1 GB Debian VM under VirtualBox. It is a bit slower when accessing the hard disk, but it is perfectly usable. I installed off the usual ISO image and used apt-get to get the rest. Basically, I don't think that on even semi-modern hardware you will need to go for a cut-down install to get a good user experience running just one VM (unless you particularly want to). It runs the full GNOME desktop, Emacs, half a dozen terminals, the Iceweasel web browser, and the OCaml and Haskell compilers just fine. Make sure you install the VirtualBox extensions; they make a big difference to the interactive experience.
FWIW, I have never gotten FreeBSD to work properly under VirtualBox; perhaps if you need that you would be better off with VMware, which does.
