I'm building an embedded system using Buildroot and I want to replace nginx+PHP with node.js.
My system currently uses eglibc, but I want to reduce the size of the system, so I want to switch to uClibc. Can node.js be compiled against uClibc-0.9.32-nptl?
Also, while my current test hardware is x86-based (an ALIX board), I'll switch in a couple of months to a plug computer, which is ARM-based.
Will node.js work on ARM-based hardware?
Just for your interest: I compiled Node.js 0.4.7 on a SheevaPlug, just taking into account
http://code.google.com/p/v8/issues/detail?id=836
https://github.com/joyent/node/issues/883
Enjoy!
Since it is built by an OpenEmbedded recipe, you should be able to build it in Buildroot (with some adaptation to turn the BitBake recipe into a makefile), and nothing prevents node.js from being compiled for an ARM platform or against uClibc (at least up to version 0.4.2).
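As a rough, untested sketch, cross-compiling it by hand with a Buildroot-generated toolchain would look something like this (the toolchain prefix is an example, and the configure options vary between node versions):

    # point the build at the Buildroot cross toolchain
    # (the tool prefix below is an example, adjust to your toolchain)
    export CC=arm-linux-uclibcgnueabi-gcc
    export CXX=arm-linux-uclibcgnueabi-g++
    export AR=arm-linux-uclibcgnueabi-ar

    # check ./configure --help for your version; for ARM the V8 snapshot
    # usually has to be disabled (see the issues linked in the other answer)
    ./configure --prefix=/usr --without-snapshot
    make
    make DESTDIR="$TARGET_DIR" install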
How can one test a Linux kernel for a specific target? Also keeping in mind point 12 from https://www.kernel.org/doc/html/latest/process/submit-checklist.html:
Has been tested with CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT, CONFIG_DEBUG_SLAB, CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES, CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_ATOMIC_SLEEP, CONFIG_PROVE_RCU and CONFIG_DEBUG_OBJECTS_RCU_HEAD all simultaneously enabled.
Consider the scenario: I want to compile a Linux kernel for an ARM-based target. I have the Linux source files and I work on my Ubuntu host PC. I did some research and found out that there are Linux kernel selftests in the tools/testing directory. This page from kernel.org says that
These are intended to be small tests to exercise individual code paths in the kernel. Tests are intended to be run after building, installing and booting a kernel.
But when I run this makefile and execute the script, it tests the kernel running on my host system, not the source tree of the kernel I want to build.
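For reference, this is roughly what I run on the host at the moment:

    # from the top of the kernel source tree, on the host PC
    make -C tools/testing/selftests run_tests

    # or only one subsystem, e.g. the timers tests
    make -C tools/testing/selftests TARGETS=timers run_tests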
Is my understanding correct?
What I want to do is run this selftest suite after my ARM target has booted. Is this possible? How can one run these selftests, or some other kind of tests, for the Linux kernel?
My background: embedded C developer, Linux Foundation Certified Sysadmin. I have generated custom images for the Raspberry Pi using Yocto, so I have limited knowledge of Yocto. I have no knowledge of Linux kernel driver development.
I have a shared library file (xxx.so) built for PowerPC Linux.
Now I want to use it in our project, but first we need to convert it to an x86 or x64 shared library for PC Linux.
Can anyone help me with this? Is it possible?
You can't: a binary file produced by compiling a C/C++ program targets a specific instruction set architecture. Since the two platforms have different instruction sets, you cannot convert a library from one architecture to the other; you therefore need to recompile the library.
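You can verify what a given library was built for by inspecting its ELF header, for example:

    # both commands report the architecture the library was compiled for
    file xxx.so
    readelf -h xxx.so | grep -E 'Class|Machine'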
First, I would say that if you are trying to use functions from a library from one Linux system on another, there is a very good chance the library is already available (somewhere) for your target machine. The odds are that the source code is available too.
If the library is specific to that version of Linux, then you should not be using it for cross-platform work.
That said, in principle you can:
Run the PowerPC version of Linux on an x86 or x64 system using a VM (a QEMU sketch follows after this list).
Try the decompile-recompile route already suggested to you.
Write your own equivalent library for the target.
See if anyone has written a binary translation application for this purpose.
But if the library you want to use is simply not available for the system you are targeting, then you should not be trying to use it at all. It's practically suicidal programming practice.
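For the first option, a minimal QEMU sketch might look like this; the disk image name and machine type are placeholders, not a tested configuration:

    # boot a PowerPC Linux guest under full-system emulation
    qemu-system-ppc -M mac99 -m 512 \
        -hda debian-powerpc.qcow2 \
        -nographic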
I'm working on a side project that requires me to configure and compile a tiny Linux system based on Ubuntu.
The result should be a tiny OS with the following features:
A Bootloader
A Kernel
A Process
A Thread
Miscellaneous (if possible)
A File System
Virtual memory
A Console
I read lots of documents about it, one of them being: http://users.cecs.anu.edu.au/~okeefe/p2b/buildMin/buildMin.html#toc3
I deleted the file system and recompiled the kernel using make xconfig. I tried deactivating modules and configuration options many times, but it's not working for me.
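For what it's worth, this is roughly the cycle I've been repeating:

    # in the kernel source tree
    make xconfig                      # deactivate modules/options here
    make -j"$(nproc)"
    sudo make modules_install install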
How can I configure the kernel for the OS with only the features I listed above? What options can I disable or enable while still having a working system?
Keeping the kernel very small is not a goal for Ubuntu, so maybe choosing Ubuntu is part of your problem. I would use what OpenWrt does as a starting point. They do a good job of making the kernel small, and it is easy to get started: see OpenWrt Buildroot – Usage.
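A rough sketch of getting started with the OpenWrt build system (the repository URL and steps change over time, so treat this as illustrative):

    git clone https://git.openwrt.org/openwrt/openwrt.git
    cd openwrt
    ./scripts/feeds update -a && ./scripts/feeds install -a
    make menuconfig      # pick your target, then trim kernel and userspace options
    make -j"$(nproc)"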
Try Linux From Scratch. It is a step-by-step approach to building a minimal Linux system that you can evolve later on: http://www.linuxfromscratch.org/.
Use the Gentoo Linux distribution; it's great for practicing the creation of Linux systems. Gentoo has excellent setup documentation, for example about configuring the kernel.
Gentoo is also a little easier and faster to set up than Linux From Scratch (LFS). If you want to go deeper, LFS may be a good learning step too.
We are porting a solution to ARM that was originally designed to run on x86/x64 Debian based systems.
So far so good. However, along with this solution we ship a printer that is compatible and comes with drivers for Linux (x86 and x64). Unfortunately, the manufacturer does not have ARM drivers for it, nor is it able to compile them from source code (I don't know why).
I've installed the printer with CUPS and used the x86 binary. But of course, whenever I send a job to the printer, the ARM system cannot run the binary, and CUPS reports:
/usr/lib/cups/filter/rastertotg2460 failed
I would like to know how I can run x86 binaries on ARMv6-based systems.
The ARM operating system is Raspbian running on a Raspberry Pi B+ board and the binaries (if you want to take a look) are here.
EDIT:
I was also made aware of this proprietary solution, which claims to make it possible to run x86 binaries on ARM systems, but all the demonstrations are for ARMv7 systems, and I'm not sure whether it will work on Raspbian on a Raspberry Pi B+ board.
I think this is going to require some serious work, but I had it the wrong way around initially.
Since you want to drive the printer, you're going to have to do the x86 emulation "inside" the CUPS system. A stand-alone x86 emulator isn't enough, since those aim to give you a full x86 system with peripheral hardware and so on. You don't need that; you just need to drive the printer.
I can imagine using some kind of x86 emulation library inside a CUPS "virtual" driver, which in turn loads the x86 binary you have and feeds it into the emulator. It would then need to expose the expected CUPS environment to the x86 code inside the emulator.
Something like Soft86 might be a good starting-point.
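Just to illustrate the idea with QEMU's user-mode emulator instead of an emulation library (this assumes qemu-user is installed and that the filter's x86 shared libraries are available under /opt/x86-root; both are assumptions for the sketch, not a tested setup), a wrapper filter could look roughly like this:

    #!/bin/sh
    # wrapper installed as /usr/lib/cups/filter/rastertotg2460 on the Pi;
    # it runs the original x86 filter binary under qemu user-mode emulation.
    # all paths below are illustrative.
    exec qemu-i386 -L /opt/x86-root \
        /opt/x86-root/rastertotg2460 "$@"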
I distribute a statically linked binary version of my application on Linux. However, on systems with a 2.4 kernel, I get a segfault on startup and the message: "FATAL: kernel too old."
How can I easily get a version up and running on a 2.4 kernel? Some of the libraries I need aren't even available on old Linux distributions from circa 2003. Is there an apt-get install or something that will allow me to easily target older kernels?
The easiest way is to simply install VirtualBox (or something similar, e.g. VMware), install CentOS 3 or any suitably old distro with a 2.4 kernel, and build/test your app on that.
Since you're getting a "kernel too old" error, chances are you're relying on some features not present in 2.4 kernels, so you'll have to track that down and rework it.
The error might simply be caused by linking statically against glibc. You could try linking glibc dynamically and all your other libs statically, though to be backwards compatible you'd have to build your app on a system with an old glibc. Using the LSB tools to build could help too.
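For the mixed static/dynamic linking suggestion, a sketch of the link line (library names are placeholders) could be:

    # link your own support libraries statically but keep glibc dynamic
    g++ -o myapp main.o \
        -Wl,-Bstatic -lfoo -lbar \
        -Wl,-Bdynamic -lpthread -ldl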
For my use case, I can't statically link my supporting libraries, and current Linux distributions seem to make that difficult to accomplish in certain situations anyway. But I needed my application binaries to run on 10-year-old Linux systems.
I also didn't want to limit myself to an ancient 10-year-old C/C++ compiler, and I found that the hardware I needed to use prevented me from installing a 10-year-old Linux distribution for some reason.
So, I did this:
Installed Docker.
Within a Docker container, installed a 10-year-old Linux system (I used Debian's Lenny distribution). This has the added advantage of making the build system available to any other machine that can run Docker.
Within the Docker container, built the current GNU compilers (8.3.0 when I did this); a rough sketch of this setup follows below.
This gave me a modern compiler that compiled binaries that would run on very old Linux systems. I did this for both 32-bit and 64-bit processors.
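A rough sketch of that container setup (the debian/eol:lenny image name is an assumption; importing a Lenny root filesystem tarball with docker import works just as well):

    # on the host: start a container from an archived Lenny image
    docker run -it --name lenny-build debian/eol:lenny /bin/bash

    # inside the container, from an unpacked gcc-8.3.0 source tree:
    ./contrib/download_prerequisites
    mkdir build && cd build
    ../configure --prefix=/opt/gcc-8.3.0 --disable-multilib --enable-languages=c,c++
    make -j4 && make install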
From there, I created a series of scripts that let me use the Docker-contained compiler to build all my supporting libraries. I made sure to set the rpath of my compiled binaries to a path relative to the binaries (using -Wl,-rpath,$ORIGIN/../lib), and built a script to retrieve any supporting libraries from the compiler: g++ -print-search-dirs to get the paths, ldd to get the supporting libraries my binaries need, and some aggressive bash scripting to find those libraries within the g++ search dirs and drop them into the rpath directory I set up.
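A boiled-down sketch of that library-gathering step (the binary and directory names are placeholders):

    #!/bin/bash
    # copy the shared libraries a binary needs, when they live in the
    # compiler's own search directories, into the bundled lib/ directory
    BIN=dist/bin/myapp      # placeholder binary
    LIBDIR=dist/lib         # matches -Wl,-rpath,'$ORIGIN/../lib'

    # directories g++ searches for libraries
    SEARCH_DIRS=$(g++ -print-search-dirs | sed -n 's/^libraries: =//p' | tr ':' ' ')

    # shared libraries the binary depends on
    for lib in $(ldd "$BIN" | awk '/=>/ {print $1}'); do
        for dir in $SEARCH_DIRS; do
            if [ -f "$dir/$lib" ]; then
                cp -v "$dir/$lib" "$LIBDIR/" && break
            fi
        done
    done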
From there, I packaged my binary accordingly, with all supporting libs.
Yeah, this is somewhat painful, but it results in a fully functioning binary capable of working on ridiculously old Linux systems without having to install different Linux distributions on multiple virtual machines.
I tried creating a proper cross-compiler (native to the current Linux distribution hosting my Docker images), but found it too difficult to work with, even with the best tools I could find to help me. Compiling the compiler within a Docker image took far less of my time, and worked rather smoothly.