I want to use Linux on the ARM core of the Zynq-7000, but I have a question:
Can I single-step debug the kernel from the IDE instead of relying on printk? Does the hard ARM core allow single-stepping into the kernel and expose all the registers, flags, and the PC?
The Eclipse-based tools for PowerPC and MicroBlaze (the Xilinx SDK) can do single-stepping, and also support the Zynq-7000.
From the linked Xilinx webpage:
SDK includes an integrated debugger supporting Zynq-7000 EPP, MicroBlaze™, and PowerPC processors. You can set breakpoints or watchpoints, step through program execution, view the program variables and stack, and view the contents of the memory in the system. You can also simultaneously debug programs running on different processors (in a multi-processor system), all from within the same debug environment.
Isn't the operating system an abstraction on top of the hardware, making hardware architectures irrelevant for software running on the same operating system?
If so, why do I need to choose my processor architecture (e.g. ARM or amd64) when downloading NodeJS, for example?
Different platforms abstract away different things:
- Java/WASM abstract away CPU architecture, memory model, device access, terminal output and file access. Any program can run anywhere.
- Linux/Windows abstract away device access, terminal output and file access. Any program built for that CPU and ABI can run.
- DOS abstracts away terminal output and file access. Any program built for that CPU and ABI that includes drivers for its devices can run.
- BIOS abstracts away terminal output. Any program built for that CPU and ABI that includes device drivers and file system drivers to load its own data can run.
You need to account for everything that is not abstracted away, and on Linux that includes the CPU architecture.
It's better than DOS where you additionally needed to make sure your program supported your sound card, but not as convenient as Java where a single Android app can run on both x86 and arm64.
You've probably heard programs can be compiled to "machine code." These are low-level instructions for the hardware, different for every type of machine (and are influenced not only by CPUs but also by peripherals).
The NodeJS interpreter is written in C and C++ and compiled to machine code. This compiled code is only valid on a particular type of machine, so you need to download the correct version of the NodeJS interpreter for your machine.
You can write pure JS code to run on NodeJS, and it will usually not depend on the machine type - it is "universal" to an extent. But as soon as the JS code (typically in specific modules and libraries) uses native code (C, C++, and others) for performance reasons, that code has to be compiled for a specific machine, and the JS module becomes bound to a specific machine as well.
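As a rough illustration (a hedged sketch, not how NodeJS itself is actually built), the same C source compiles into entirely different machine code depending on the target CPU, and a binary produced for one architecture will not run on another:

    /*
     * Illustrative only: the predefined macros below are set by GCC/Clang
     * at compile time, so each build "knows" which machine code it contains.
     */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__x86_64__)
        puts("built as x86-64 machine code");
    #elif defined(__aarch64__)
        puts("built as 64-bit ARM machine code");
    #elif defined(__arm__)
        puts("built as 32-bit ARM machine code");
    #else
        puts("built for some other architecture");
    #endif
        return 0;
    }

Trying to run, say, the x86-64 build on an ARM machine fails outright (on Linux with an "Exec format error"), which is exactly why NodeJS ships a separate download per architecture.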
The operating system has little to no influence in all of this. It basically says how the machine code will be written into a file (e.g. which file format to use), and abstracts access to hardware (such as the disk drives) in a way this code can use.
Historically, there have been attempts to create operating systems which would completely abstract the underlying machine in a way that makes programs completely portable. They usually do it by not allowing user-written machine code (i.e. user-compiled programs) to execute, and only allowing interpreted code to run.
The operating system installed must support the processor(s), data buses and memory addressing of the hardware.
At a systems level, in kernel code and device drivers it is impossible to ignore details of the hardware architecture. Applications typically sit a level above all this but are still dependent on the abstraction layers below.
Incidentally, Node.js is written in part in C and C++, which takes advantage of the performance improvements offered by 64-bit processing. Squeezing out optimised performance has been a key objective of Node.js design; it has been refactored more than once to that end.
There are many distributions of Linux. All of them have one thing in common, however: the kernel. And Linux programs run across all of them. If I make a minimalistic distribution from the kernel, will current programs made for Linux run? What defines the differences between distributions? I am a beginner at this stuff, so don't be harsh if it is a stupid question. Thank you.
Yes, with caveats.
You need to make sure you have full C support, and by that I mean something like glibc installed or installable; otherwise you cannot build programs for your minimal install. If you can install and compile C programs on Linux, then you can in effect build practically everything else from scratch.
If you want to be able to download binaries and run them, this is different: the binaries will likely require shared libraries that they would have had on the systems they were built for. Unless you have those libraries, you cannot run the existing binaries you find online.
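To make this concrete, here is a minimal sketch (the file name and the commands in the comments are only illustrative) of how even a trivial dynamically linked program depends on the shared C library of the system it runs on:

    /*
     * hello.c - a minimal dynamically linked program.
     *
     * Build:    gcc hello.c -o hello
     * Inspect:  ldd ./hello
     *
     * ldd lists the shared libraries the binary needs at run time,
     * typically libc.so.6 plus the dynamic loader. If the target system
     * lacks a compatible version of any listed library, the binary
     * will not run there.
     */
    #include <stdio.h>

    int main(void)
    {
        puts("hello from a dynamically linked binary");
        return 0;
    }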
What defines the differences of distributions?
There are a lot of defining factors in each distribution. If we disregard things like...
- Licensing, i.e. Red Hat vs Debian
- Stance on things like GPL/BSD/non-free
- Release schedules, i.e. Debian vs Ubuntu
- Target audience, i.e. Ubuntu vs Debian
I think the biggest defining factor is package management, i.e. yum/rpm vs apt/dpkg, and how the base configuration is managed on the machine. This is certainly the thing I seem to use the most and miss the most when I change distributions. The kernel itself is very rarely on my mind, which is a large part of its success.
Most people start with something like ISOLINUX and get a bootable CD, but even then you normally choose a base distribution. If you want to create a base distribution, that's a ton of work. Have a look at this great infographic of the Linux family tree:
https://en.wikipedia.org/wiki/List_of_Linux_distributions#/media/File:Linux_Distribution_Timeline.svg
If you look at Debian/Ubuntu, the amount of infrastructure these distributions have set up is quite staggering. They have millions, perhaps even billions, of lines of code in them, all designed to run on their supported versions. You might be able to take a binary from one of them and run it on Red Hat, but it's likely to fail unless the planets are in alignment, etc. Some people think this is actually a bad thing:
https://en.wikipedia.org/wiki/Ingo_Moln%C3%A1r#Quotes
The basic failure of the free Linux desktop is that it's, perversely, not free enough... Desktop Linux distributions are trying to "own" 20 thousand application packages consisting of over a billion lines of code and have created parallel, mostly closed ecosystems around them... The Linux package management method system works reasonably well in the enterprise (which is a hierarchical, centrally planned organization in most cases), but desktop Linux on the other hand stopped scaling 10 years ago, at the 1000 packages limit...
If I make a minimalistic distribution from the kernel, will current programs made for Linux run?
Very few programs actually use the kernel directly. They also need a libc, which is responsible for actually implementing most of the C routines used by either the programs themselves or the VMs running their code.
It is possible to statically link the libc to the program, but this both bloats the size of the program and makes it impossible to fix security issues in the linked libraries without rebuilding the whole program.
Well, certain programs demand a specific version of the kernel. Usually these programs act as "drivers" for the rest of the system (e.g. nvidia proprietary drivers: some of them act in kernel space while others run in user space, but require that very specific kernel module and thus that very specific kernel build).
A less strict case is when a program demands a specific capability from the kernel. For example, almost all modern Linux virtualization systems rely on the cgroups feature; thus, to use them you need to have a reasonably fresh kernel.
Nevertheless, a lot of the kernel API is stable, so you can rely on it. But usually programs don't call kernel routines directly. A typical way to use a kernel function is to call a corresponding library routine which wraps and leverages the kernel API. The main, most basic library of that kind is libc.
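As a small sketch of that layering (Linux and glibc assumed), the same kernel service can be reached either through the libc wrapper or, far more rarely, through the raw system call interface that the wrapper hides:

    /* Sketch: libc wrapper vs. raw syscall for the same kernel service. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>       /* getpid() - the libc wrapper         */
    #include <sys/syscall.h>  /* SYS_getpid - the raw syscall number */

    int main(void)
    {
        pid_t via_libc = getpid();            /* wrapper around the kernel API */
        long  via_raw  = syscall(SYS_getpid); /* bypasses the wrapper          */

        printf("libc wrapper: %ld, raw syscall: %ld\n",
               (long)via_libc, via_raw);
        return 0;
    }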
Technically, programs compiled for one version of libc (as well as other shared libraries) can be used with slightly different versions of the corresponding libraries. For example, a lot of people use Skype compiled for SuSE on completely different Linux distributions. That Skype is a pretty complex application with a lot of libraries linked in and so on, but nevertheless it works without any significant problem. The same goes for a lot of other proprietary programs, which couldn't be compiled for a given distribution or even for a given installation. But sometimes things just break :) Those binary incompatibilities are quite rare, but they happen from time to time.
Is it possible to take a device, say, a PDA, and wipe all software off of it and install your own?
For example, could I take a Mac terminal program and install it onto a PDA (with Wi-Fi) and do SSHing and such?
And what language would/could it be in?
The language this could be in is not really the issue; it is, mostly, a matter of system compatibility.
Software applications do not run in a vacuum: they rely on the underlying operating system, or at the very least on some form of virtual environment or a runtime such as Java, Silverlight, etc.
Before one can re-purpose a PDA or other similar device, one needs to install some system / host-software of sorts on it, and doing so can be rather complicated because of the proprietary and dedicated nature of many of the hardware subsystems therein.
General purpose systems such as Linux or Windows can be installed on various hardware platforms (including appliances) provided that:
- said hardware subsystems (CPU, keyboard/input devices, display device, storage devices...) comply with some specification, and
- the corresponding device drivers are available.
In the case of PDAs, GPS appliances, smartphones and various other hardware platforms (and while many such platforms run on custom versions of Windows, Linux, Android etc.), there are typically enough proprietary differences, custom hardware and other deviations from specifications that installing alternative operating systems or runtimes is usually a challenge. Lack of documentation can also be a limiting factor.
Many such devices, however, host some form of runtime atop the system (Java in many cases), and rather than installing an alternative operating system anew, it is possible, in some cases, to install and run applications written in these hosted languages.
Even then, uninstalling existing applications (say, to make room) and installing new applications may be a challenge as well. Difficulties arise because of:
- purposeful "locking in" of the appliances (the manufacturers purposely prevent such re-purposing, using various forms of encryption, undocumented features and the like)
- intrinsic limitations of the runtime (whereby only a subset / sandboxed version of the language features is available).
In short, the specific approach for re-purposing appliances depends on:
the specific appliance/device: make, version etc.
the intended purpose: which particular uses are desired for the new device
the technical expertise and patience of the implementers ;-)
In general this is far from trivial: beginners beware! (*)
(*) BTW, the relative lack of sophistication apparent in the question seems to indicate that the OP may not have the necessary skills for this kind of "hacking". It can, however, be a very fun and rewarding learning experience.
No, but you can probably find a PDA terminal and do SSH with it.
Macs and PDAs have different architectures (their processors speak different languages).
This question is for practicing Linux kernel hackers:
Generally, it is best to test/play with Linux kernel changes/hacks in a virtualized environment.
What virtual environment do you use for testing your hacks?
How do you make a minimalistic filesystem (with basic utils) to use with the environment? If you are using a ready-made filesystem, which one are you using?
What useful tricks do you use with your environment (like installing a new kernel, sharing files, etc.)?
Please provide a step-by-step procedure to set up the environment, if possible.
A collection of this info doesn't seem to be available on the web.
Thanks.
Different people use different setups; I don't think there is one true answer.
I currently use VirtualBox as Hypervisor with a file system created with Buildroot.
Apart from other VMs (kvm, qemu, vmware etc.) you could also use User Mode Linux to much the same effect if your hacking is in the more "logical" layers of the kernel.
I'm currently using a Fedora14 VM running in QEMU/KVM on a Fedora14 host for my network driver development. I use a fairly standard install with the Software Development packages, plus whatever web or networking tools (e.g. wireshark) might be useful for the task. I typically set up a serial console on the VM and monitor it with minicom on the host - this helps me catch stack traces when I'm chasing a bug. I usually have my source and editing environment on the host machine with the files on an NFS file system that the VM mounts - this way I don't have to keep copying files to and from the VM. With the host running the same version kernel, I can compile the driver quickly on the multicore host and test it in the VM.
I was always attracted to the world of kernel hacking and embedded systems.
Has anyone got good tutorials (+easily available hardware) on starting to mess with such stuff?
Something like kits for writing drivers etc, which come with good documentation and are affordable?
Thanks!
If you are completely new to kernel development, I would suggest not starting with hardware development, and instead going for some "software-only" kernel modules like proc files / sysfs or, for more complex examples, filesystem / network development, developing on a UML/VMware/VirtualBox/... machine so that crashing your machine won't hurt so much :) For embedded development you could go for a small ARM development kit or a small Via C3/C4 machine, or any old PC which you can burn with your homebrew USB / PCI / whatever device.
A good place to start is probably Kernelnewbies.org - which has lots of links and useful information for kernel developers, and also features a list of easy to implement tasks to tackle for beginners.
Some books to read:
Understanding the Linux Kernel - a very good reference detailing the design of the kernel subsystems
Linux Device Drivers - is written more like a tutorial with a lot of example code, focusing on getting you going and explaining key aspects of the linux kernel. It introduces the build process and the basics of kernel modules.
Linux Kernel Module Programming Guide - Some more introductory material
As suggested earlier, looking at the Linux code is always a good idea, especially as Linux kernel APIs tend to change quite often... LXR helps a lot with a very nice browsing interface - lxr.linux.no
To understand the Kernel Build process, this link might be helpful:
Linux Kernel Makefiles (kbuild)
Last but not least, browse the Documentation directory of the Kernel Source distribution!
Here are some interesting exercises insolently stolen from a kernel development class:
Write a kernel module which creates the file /proc/jiffies, reporting the current time in jiffies on every read access (a minimal sketch follows after this list).
Write a kernel module providing the proc file /proc/sleep. When an application writes a number of seconds as ASCII text into this file ("echo 3 > /proc/sleep"), it should block for the specified amount of seconds. Write accesses should have no side effect on the contents of the file, i.e., on the read accesses, the file should appear to be empty (see LDD3, ch. 6/7)
Write a proc file where you can store some text temporarily (using echo "blah" > /proc/pipe) and get it out again (cat /proc/pipe), clearing the file. Watch out for synchronisation issues.
Modify the pipe example module to register as a character device /dev/pipe, add dynamic memory allocation for write requests.
Write a really simple file system.
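For the first exercise, here is a minimal, hedged sketch of what such a module could look like (assuming a kernel of 5.6 or newer, where struct proc_ops replaced struct file_operations for proc entries; older kernels differ slightly):

    #include <linux/module.h>
    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>
    #include <linux/jiffies.h>

    /* Called on every read of /proc/jiffies: print the current jiffies value. */
    static int jiffies_show(struct seq_file *m, void *v)
    {
        seq_printf(m, "%lu\n", jiffies);
        return 0;
    }

    static int jiffies_open(struct inode *inode, struct file *file)
    {
        return single_open(file, jiffies_show, NULL);
    }

    static const struct proc_ops jiffies_proc_ops = {
        .proc_open    = jiffies_open,
        .proc_read    = seq_read,
        .proc_lseek   = seq_lseek,
        .proc_release = single_release,
    };

    static int __init jiffies_init(void)
    {
        if (!proc_create("jiffies", 0444, NULL, &jiffies_proc_ops))
            return -ENOMEM;
        return 0;
    }

    static void __exit jiffies_exit(void)
    {
        remove_proc_entry("jiffies", NULL);
    }

    module_init(jiffies_init);
    module_exit(jiffies_exit);
    MODULE_LICENSE("GPL");

Build it with a normal out-of-tree kbuild Makefile (obj-m += ...), insmod it, and every "cat /proc/jiffies" should print a fresh value.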
An absolute must is this book by Rubini (available both as a hardcopy and as a free soft copy).
He gives implementations of several dummy drivers that don't require any hardware other than your PC. For getting started in kernel development, it's the easiest way to go.
As for doing embedded work I would recommend purchasing one of the numerous SBC (single board computers) that are out there. There are a number of these that are based on x86 processors, usually with PC/104 interfaces (electrically PC/104 is identical to the ISA bus standard, but based on stackable connectors rather than edge connectors - very easy to interface custom hardware to)
They usually have VGA connectors that make debugging easier.
For embedded Linux hacking, the simple Linksys WRT54G router that you can buy everywhere is a development platform in its own right: http://en.wikipedia.org/wiki/Linksys_WRT54G_series
The WRT54G is notable for being the first consumer-level network device that had its firmware source code released to satisfy the obligations of the GNU GPL. This allows programmers to modify the firmware to change or add functionality to the device. Several third-party firmware projects provide the public with enhanced firmware for the WRT54G.
I've tried installing OpenWrt and DD-WRT firmware on it. You can check those out as a starting point for hacking on a low-cost platform.
For starters, the best way is to read a lot of code. Since Linux is Open Source, you'll find dozens of drivers. Find one that works in some ways like what you want to write. You'll find some decent and relatively easy-to-understand code (the loopback device, ROM fs, etc.)
You can also use lxr.linux.no, which is the Linux code cross-referenced. If you have to find out how something works and need to look into the code, this is a good and easy way.
There's also an O'Reilly book (Understanding the Linux Kernel; the 3rd edition covers the 2.6 kernels), or if you want something for free, you can use the Advanced Linux Programming book (http://www.advancedlinuxprogramming.com/). There is also a lot of specific documentation about file systems, networking, etc.
Some things to be prepared for:
you'll be cross-compiling (see the sketch after this list). The embedded device will use a MIPS, PowerPC, or ARM CPU but won't have enough CPU power, memory, or storage to compile its own kernel in a reasonable amount of time.
An embedded system often uses a serial port as the console, and to lower the cost there is usually no connector soldered onto production boards. Debugging kernel panics is very difficult: unless you can solder on a serial port connector, you won't have much information about what went wrong.
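As a sketch of what cross-compiling looks like in practice (the toolchain prefix arm-linux-gnueabihf-gcc is just one common example; the right one depends on your target):

    /*
     * cross_hello.c - trivial program to sanity-check a cross toolchain.
     *
     * Native build (runs on the build host):
     *     gcc cross_hello.c -o hello_host
     *
     * Cross build for a 32-bit ARM target (example toolchain prefix):
     *     arm-linux-gnueabihf-gcc cross_hello.c -o hello_arm
     *
     * "file hello_arm" should report an ARM ELF executable; it will run
     * on the board (given a compatible libc there) but not on the host.
     */
    #include <stdio.h>

    int main(void)
    {
        puts("hello from the target board");
        return 0;
    }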
The Linksys NSLU2 is a low-cost way to get a real embedded system to work with, and has a USB port to add peripherals. Any of a number of wireless access points can also be used; see the OpenWrt compatibility page. Be aware that current models of the Linksys WRT54G you'll find in stores can no longer be used with Linux: they have less RAM and Flash in order to reduce the cost. Cisco/Linksys now uses vxWorks on the WRT54G, with a smaller memory footprint.
If you really want to get into it, evaluation kits for embedded CPUs start at a couple hundred US dollars. I'd recommend not spending money on these unless you need it professionally for a job or consulting contract.
I am a complete beginner in kernel hacking :) I decided to buy two books, "Linux Program Development: a guide with exercises" and "Writing Linux Device Drivers: a guide with exercises". They are very clearly written and provide a good base for further learning.