Can a device driver be written in or compiled to LLVM IR? - linux

The reason I am interested is that there is an everlasting problem with Linux and proprietary drivers. Why do hardware vendors not ship their drivers in LLVM IR form?

You can write Linux device drivers in user mode code. I have seen demonstrations written in Python (handy for prototyping).
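For reference, the usual C route for a user-mode driver is the UIO framework: the kernel exposes the device's registers and interrupts through /dev/uioN, and everything else lives in an ordinary process. A minimal sketch, assuming a UIO-bound device at /dev/uio0 whose first register is a hypothetical 'go' bit:

    /* uio_demo.c - user-mode driver sketch via the UIO framework.
     * Assumptions: the device is bound to /dev/uio0 and its first
     * mapping is a register block; register 0 is hypothetical. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the device's first memory region into this process. */
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        regs[0] = 1;    /* poke the hypothetical 'go' register */

        /* read() on a UIO fd blocks until the next interrupt and
         * returns the interrupt count. */
        uint32_t irq_count;
        if (read(fd, &irq_count, sizeof(irq_count)) == sizeof(irq_count))
            printf("interrupts so far: %u\n", irq_count);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }

It builds with an ordinary userspace toolchain, no kernel build environment needed, which is what makes this approach handy for prototyping.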
Presumably your idea is that hardware vendors could ship an LLVM IR driver, and then the driver would work with x86, ARM, or anything else? Most hardware vendors are not interested in niche markets, and only want to support their hardware on the particular platforms they have tested on.
There is very rarely any interesting IPR in a driver (although there may well be in the library on top of the driver). If vendors wanted to support multiple platforms, they could just ship C code with instructions to build, and a restrictive (or even GPL) license.

Related

Is it possible to wholly simulate the source code for firmware locally?

At my work we are trying to debug firmware code locally, as if one could debug it like typical software. Right now this is done through dongles, with the code running on the hardware itself. We have looked into QEMU, but our firmware has some proprietary device models, so customizing the QEMU device models would cause licensing issues due to QEMU's GPL license.
When I looked around for alternatives to QEMU, I found about three kinds of emulation software with licenses permissive enough that device model customization would not be an issue:
OpenStack
bhyve
box86
However, none of these seems to fulfill a use case similar to QEMU's: OpenStack is more for cloud applications, box86 is more for games, and bhyve doesn't even have a download link. And many of the other alternatives I found either had the same licensing issues as QEMU, did not have an open-source license, or did not deal in firmware simulation.
This leads me to ask: Is there a tool out there that makes it possible to wholly simulate the source code for firmware locally?

What is SoC (system on chip)? Does Renesas V850 have a system on it?

I have experience writing a C program and burning the program into a chip using an IDE provided by the chip manufacturer.
I also heard that there is a concept called SoC, which means an operating system, like Linux, is running on a chip. In this case, I can run my program on the chip just like on a Linux PC.
I don't really know the differences between these two kinds of chips. Are they the same? Can I install Linux on every chip?
And I have to use a chip called Renesas V850 in my work. Which kind of chip is this V850?
SoC is just a marketing term for 'more than a processor on a chip'. It doesn't imply Linux or any operating system.
Years ago, each part of a system was on its own chip: processor, serial port, memory, ADC, DAC, etc. You had a PCB and a schematic that tied them all together.
Over time, more and more got integrated into the processor, particularly for application-specific processors and microcontrollers. Today, pretty much only big-iron processors like Intel's and AMD's flagship parts are stand-alone, and even then there are some x86 chips produced that are 'SoCs' (like the AMD Geode line, if that's still around). Everything else has USB ports, serial ports, ADCs, DACs, even wireless radios integrated onto the same die.
As for 'what is a Renesas V850?', you'd do better to google that and read the product documentation. It isn't an ARM or MIPS core, and it doesn't appear to support the mainline Linux kernel, only μClinux.
The Renesas V850 Wikipedia page states that Linux kernel support for the V850 has been absent since version 2.6.27 (which was released in 2008).
Typically, you need to know what group your chip belongs to and then read more about it on the Renesas website. They provide all the documentation you may need. There is also a section with application notes and sample code that may help.

Porting PCIe driver from Linux to FreeBSD

I have a fairly large PCIe driver written on/for Linux, and now I need to port it to FreeBSD. I don't yet know the BSD version, but I think at this point that's irrelevant, as I'd like to understand in general which major items will have to be modified during the porting effort.
The good thing is that the driver is partitioned into an OS-independent "library" layer (OSI) and an OS-dependent layer, so it already has a "framework" permitting it to be ported to other OSes, and I hope most of the effort will be focused on the OSI side. So far I see the following big chunks of work:
1) init code, i.e. the OS-specific code that "plugs" the driver into the system (similar to what init_module and cleanup_module do in Linux)
2) code registering the driver with the PCI core subsystem of the kernel
3) character driver registration code
4) DMA operations
What else should I be paying attention to? The driver serves a device doing hardware encryption, so it is an offload device (ingress packets from the NIC enter the system normally and are then diverted to the device).
If there are useful web links to descriptions of BSD driver development/porting (similar to LDD), I'd happily accept them :)
In 2011, when porting the Linux InfiniBand drivers to FreeBSD, Jeff Roberson (and later Mellanox) added shims that ease porting Linux drivers and allow most of the code to be used as-is. So, as a newcomer from the Linux driver development world, I'd start by looking at:
https://svnweb.freebsd.org/base/head/sys/ofed/include/linux/
There you will find implementations of many of the required Linux driver APIs on top of their native FreeBSD counterparts.
There is another quickstart document by John-Mark, here, which is helpful for those who are already familiar with driver writing.
If you would prefer to start from the beginning, I think the FreeBSD Architecture Handbook would be a useful starting point.
Additionally, there is a book by Kirk McKusick, Robert Watson and George Neville-Neil, titled "The Design and Implementation of the FreeBSD Operating System"; the latest version at this time is the 2nd edition, and its chapter 8 covers device drivers in detail.
Most device drivers are merely wrappers that adapt hardware operations to OS interfaces, so a well-layered driver should be relatively easy to port nowadays.
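To make items 1) and 2) from the question concrete on the FreeBSD side, here is a minimal sketch of the newbus probe/attach plumbing and PCI registration. The "mydrv" name and the PCI IDs are hypothetical, and it uses the classic DRIVER_MODULE() signature (pre-FreeBSD-13, matching the era of the link above):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/module.h>
    #include <sys/kernel.h>
    #include <sys/bus.h>
    #include <dev/pci/pcivar.h>

    #define MYDRV_VENDOR 0x1234     /* hypothetical vendor ID */
    #define MYDRV_DEVICE 0x5678     /* hypothetical device ID */

    static int
    mydrv_probe(device_t dev)
    {
        /* Called for every PCI device; claim ours by vendor/device ID. */
        if (pci_get_vendor(dev) == MYDRV_VENDOR &&
            pci_get_device(dev) == MYDRV_DEVICE) {
            device_set_desc(dev, "Example crypto offload device");
            return (BUS_PROBE_DEFAULT);
        }
        return (ENXIO);
    }

    static int
    mydrv_attach(device_t dev)
    {
        /* Map BARs with bus_alloc_resource_any(), hook the interrupt,
         * create the character device, set up DMA tags here. */
        return (0);
    }

    static int
    mydrv_detach(device_t dev)
    {
        /* Undo everything done in attach. */
        return (0);
    }

    static device_method_t mydrv_methods[] = {
        DEVMETHOD(device_probe,  mydrv_probe),
        DEVMETHOD(device_attach, mydrv_attach),
        DEVMETHOD(device_detach, mydrv_detach),
        DEVMETHOD_END
    };

    static driver_t mydrv_driver = {
        "mydrv",
        mydrv_methods,
        0       /* sizeof(struct mydrv_softc) in a real driver */
    };

    static devclass_t mydrv_devclass;

    /* Register the driver with the pci bus: item 2) from the question. */
    DRIVER_MODULE(mydrv, pci, mydrv_driver, mydrv_devclass, NULL, NULL);

Items 3) and 4) then map to make_dev()/struct cdevsw for the character device and to the busdma API (bus_dma_tag_create() and friends) for DMA; those usually need the most rework, since they have no one-to-one Linux equivalents.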
If you have questions, or you are a hardware vendor, you can also join the various FreeBSD mailing lists (freebsd-drivers@, etc.).

How to use QEMU for learning ARM Linux kernel development?

I want to learn things like developing a device driver, and I want to use QEMU for this because I have no ARM hardware board such as a BeagleBoard. What do you suggest? Can I use the QEMU emulator to learn Linux kernel development on ARM targets, or is there another option I should try?
It depends on whether you want to learn hardware or software. If you really want to experiment with the different GPIO outputs to implement things like servo motor control or LED blinking and display, a cheap board (e.g. a Raspberry Pi, about USD 25) is much preferred.
But if you want to learn software in general, QEMU is definitely much faster, and it lets you see the internals of what is happening. Experimenting with hardware requires an oscilloscope and the like, whereas experimenting with software relies on the error output of what others have implemented in their software.
As for driver development, the first version should be rapidly developed on QEMU, but testing, which naturally involves hardware, should be done on the hardware.
The bottom line is: x86 is so much faster that cross-compilation is always done on x86 before the result gets booted on the ARM board. Compiling on the board is too time-consuming, and it may require a considerable amount of storage space for development libraries and source code.
I used QEMU a while back to develop device drivers for an embedded programming class. It worked quite well. At the time we were learning device driver programming and then transitioning to Gumstix boards. I don't remember exactly what core we were using, but QEMU worked well.
I haven't done any ARM development, so I don't know if it is the best choice for learning ARM. But if you are new to drivers, it is probably a good place to start.
QEMU + Buildroot is a great combination for ARM kernel development.
Here is my setup that supports (mostly) both x86 and ARM: https://github.com/cirosantilli/linux-kernel-module-cheat
The kernel, toolchain, userland and QEMU are so portable that going from x86 to ARM is almost trivial.
Actually, you will seldom touch arch specifics, so you might as well start with x86.
I haven't played with ARM devices yet, only x86, but I bet it will be equally easy (i.e. not trivial, due to the lack of tutorials, but doable).
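For a first experiment, the classic hello-world module is a good test that the QEMU + Buildroot flow works end to end, and it shows the point above about arch specifics (a minimal sketch; the name is arbitrary):

    /* hello.c - arch-independent kernel module; the same source builds
     * unchanged for x86 and ARM, so QEMU is a convenient first target. */
    #include <linux/init.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        pr_info("hello: loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");

Build it out of tree against the Buildroot-produced kernel with the matching cross-toolchain, copy it into the QEMU image, and insmod it there.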

Why do I need to recompile the VMware kernel module after a Linux kernel upgrade?

After a Linux kernel upgrade, my VMware Server cannot start until I use vmware-config.pl to do some reconfiguration work (including building some kernel modules).
If I update my Windows VMware host with the latest Windows Service Pack, I usually do not need to do anything to keep VMware running.
Why does VMware work differently between Linux and Windows? Does this recompilation bring any benefit on the Linux platform over Windows?
Go read The Linux Kernel Driver Interface.
This is being written to try to explain why Linux does not have a binary kernel interface, nor does it have a stable kernel interface. Please realize that this article describes the _in kernel_ interfaces, not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break. I have old programs that were built on a pre 0.9something kernel that still work just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.
It reflects the view of a large portion of Linux kernel developers:
the freedom to change in-kernel implementation details and APIs at any time allows them to develop much faster and better.
Without the promise of keeping in-kernel interfaces identical from release to release, there is no way for a binary kernel module like VMware's to work reliably on multiple kernels.
As an example, if some structures change in a new kernel release (for better performance, more features, or whatever other reason), a binary VMware module may cause catastrophic damage using the old structure layout. Compiling the module again from source will capture the new structure layout and thus stand a better chance of working -- though still not 100%, in case fields have been removed, renamed, or given different purposes.
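To make the layout hazard concrete, here is a hypothetical sketch (invented types, not real kernel structures; in reality it would be the same struct name in two kernel versions, split into _v1/_v2 here so the sketch compiles standalone):

    /* Layout the binary module was compiled against (kernel N): */
    struct device_info_v1 {
        int  irq;           /* offset 0 */
        long base_addr;     /* offset 8 on x86-64, after padding */
    };

    /* Layout in kernel N+1, after new fields are inserted: */
    struct device_info_v2 {
        int  irq;           /* offset 0  */
        int  flags;         /* offset 4  (new) */
        long dma_mask;      /* offset 8  (new) */
        long base_addr;     /* offset 16 (moved) */
    };

    /* The old binary still reads base_addr at offset 8, so on the new
     * kernel it silently reads dma_mask instead. Recompiling against
     * the new headers picks up the new offsets automatically. */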
If a function changes its argument list, or is renamed or otherwise made no longer available, not even recompiling from the same source code will work: the module will have to be adapted to the new kernel. This is considered acceptable because everybody (should) have the source and (can find somebody who) is able to modify it to fit. "Push work to the end-nodes" is a common idea in both networking and free software: since the resources at the fringes (or of the developers outside the Linux kernel) are larger than the limited resources of the backbone (or of the Linux developers), the trade-off of making the former do more of the work is accepted.
On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility as much as possible -- they have no choice, as they are playing in a proprietary world. In a way, this makes it much easier for outside developers who no longer face a moving target, and for end-users who never have to change anything. On the downside, this forces Microsoft to maintain backwards-compatibility, which is (at best) time-consuming for Microsoft's developers and (at worst) is inefficient, causes bugs, and prevents forward progress.
Linux does not have a stable kernel ABI -- things like the internal layout of data structures change from version to version. VMware's modules need to be rebuilt to use the ABI of the new kernel.
On the other hand, Windows has a very stable kernel ABI that does not change from service pack to service pack.
To add to bdonlan's answer, ABI compatibility is a mixed bag. On one hand, it allows you to distribute binary modules and drivers which will work with newer versions of the kernel. On the other hand, it forces kernel programmers to add a lot of glue code to retain backwards compatibility. Because Linux is open source, and because kernel developers debate whether binary modules should even be allowed at all, the ability to distribute binary modules isn't considered that important. On the upside, Linux kernel developers don't have to worry about ABI compatibility when altering data structures to improve the kernel. In the long run, this results in cleaner kernel code.
It's a consequence of Linux and Windows being developed in different cultural environments and expectations: http://www.joelonsoftware.com/articles/Biculturalism.html. In short: Windows is designed to be suitable for users, whereas Linux evolves to be suitable for open source developers.
