We are planning to release our Embedded Linux product using Yocto.
Currently, I see 'Warrior' is the stable release version.
https://wiki.yoctoproject.org/wiki/Releases
Looking at the poky source code, I find a lot of tags.
https://git.yoctoproject.org/cgit/cgit.cgi/poky/refs/tags
How do I decide which tag to choose? I see a poky-yocto tag and a poky-warrior tag.
Unless your hardware supplier provides you with a customized Yocto version (usually a Linux kernel, related patches, and some recipes to set up specific hardware), you should always start with the latest stable version available at the moment: it will contain the latest fixes from the community, and you are probably interested in them.
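As a concrete illustration, the release tags follow a sortable naming scheme, so once you have a poky clone you can pick the newest stable tag mechanically. The tag list below is a hard-coded stand-in for the output of `git tag -l 'yocto-*'`:

```shell
# In a real checkout you would populate this from the repository:
#   git clone git://git.yoctoproject.org/poky && cd poky
#   tags=$(git tag -l 'yocto-*')
# Hard-coded here so the example is self-contained:
tags='yocto-2.5
yocto-2.6
yocto-2.7'

# A version sort picks out the newest tag (yocto-2.7 is the Warrior series).
latest=$(printf '%s\n' "$tags" | sort -V | tail -n 1)
echo "$latest"
# Then branch your product from that tag:
#   git checkout -b my-product "$latest"
```

Branching from the tag (rather than building on a moving branch head) gives you a fixed, reproducible starting point for your product.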
I customize Linux (RHEL) operating system releases for specific platforms, customers, etc. (and Windows). We do kickstart installs, customize the branding, install specific packages, customize partitioning, and so on.
I need suggestions on better ways to version control these OSes. Currently, we use SVN to control the base installs (i.e. RHEL 5.6, 6.1, etc.). We don't upload the base RPMs to the Subversion server, as they bloat the repository quickly; we only commit the custom elements. We have YUM repos for each of the versions. A script run after checking out the base OS version then grabs the specific package versions needed from the YUM repo. I have been unable to find any other posts/guides doing exactly what I need to do.
We basically have to version control the package list for each OS release and do goofy things to get the packages into the base OS to create the final OS image, which then gets installed via kickstart (a different process for Windows).
I find this cumbersome, and it leads to potential errors. There must be a better way! I've looked into an artifact repository for the non-modified components, but I'm not sure whether that would help significantly.
PS: Version control for each custom release is critical. I can't even just say RHEL 6.2 is RHEL 6.2; I have to be able to prove that a custom release is the correct custom release somehow (as SVN would let me do).
ANY suggestions are appreciated!
Have you considered using Pulp? You can create a repository for each "release" that you want to track.
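Whatever repository tool you use, the "prove this custom release is the right one" requirement usually reduces to a checksummed manifest of exact package versions, which is small enough to commit to SVN alongside the kickstart and branding files. A minimal sketch (the package names here are made up for illustration):

```shell
# Record the exact package set of a built image as a sorted manifest,
# then checksum it.  On a real system the list would come from:
#   rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n'
printf '%s\n' 'bash-4.2-9.el6.x86_64' 'kernel-2.6.32-220.el6.x86_64' \
  | sort > manifest.txt
sha256sum manifest.txt > manifest.sha256

# Later, on an installed machine, regenerate the manifest the same way
# and verify it matches the committed checksum:
sha256sum -c manifest.sha256
```

The checksum file is what you tag per release; any drift in the installed package set makes the verification step fail.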
We are working on an embedded system using a MIPS (Broadcom) core.
Now I want to patch the vendor-provided 2.6.31 kernel with the AppArmor patches.
However, I can't find them.
According to http://wiki.apparmor.net/index.php/Main_Page, the patches could be found in the Linux git tree at git://git.kernel.org/pub/scm/linux/kernel/git/jj/apparmor-dev.git. However, that tree cannot be found any more (maybe it was lost after the kernel.org breach?).
Where can I find this patch now ?
Thanks!
2.6.31 is quite old at this point; if you can get your vendor to supply you with newer kernel sources, that'd be best.
If they cannot, you can take the patches from a distribution-provided kernel package from that era -- say, the openSUSE 11.2 kernel source rpm.
The primary AppArmor development repository is hosted on LaunchPad:
https://code.launchpad.net/~apparmor-dev/apparmor/master
The git repository you found was a mirror John made of the LaunchPad repository, mainly for his own use. Somewhere along the way it was removed and replaced with:
git://git.kernel.org/pub/scm/linux/kernel/git/jj/linux-apparmor.git
The aa-next branch contains John's checkins intended for the next release.
There are AppArmor tarballs on the Launchpad download page; the 2.5 tarball has patches for 2.6.24, 2.6.25, 2.6.26, 2.6.27, and 2.6.28, and the 2.5.2 tarball has patches for 2.6.36, 2.6.36.2, and 2.6.37.
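Once you have a tarball, the patches are applied with plain `patch -p1` from the top of the kernel tree. Below is a self-contained demonstration of the mechanics on a throwaway file; the real tarball's directory layout in the trailing comments is an assumption, so check it after unpacking:

```shell
# patch -p1 strips the leading a/ or b/ path component that unified
# diffs carry, which is how kernel patch series are normally applied.
mkdir -p kernel/security
printf 'apparmor disabled\n' > kernel/security/Kconfig
cat > aa.patch <<'EOF'
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -1 +1 @@
-apparmor disabled
+apparmor enabled
EOF
(cd kernel && patch -p1 < ../aa.patch)
cat kernel/security/Kconfig

# Against the real tree the loop would look something like (paths assumed):
#   cd linux-2.6.31
#   for p in ../apparmor-2.5.2/kernel-patches/*; do patch -p1 < "$p"; done
```

If any hunk fails against the vendor's 2.6.31 sources (which likely carry their own changes), `patch` leaves `.rej` files you'll need to resolve by hand.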
I have a Tegra Ventana development kit, on which I want to run Linux. (NVIDIA only makes Android 2.3 and 2.2 images available for the Ventana: see "Ventana Specific Downloads" at the bottom of http://developer.nvidia.com/tegra-ventana-development-kit.)
I found an announcement of the release of Linux For Tegra (L4T), which predates the release of the Ventana. I also found notice that L4T has been removed from NVIDIA's downloads page. So I guess if I want to run Linux on the Ventana (before NVIDIA eventually re-releases it) I'm going to have to build it myself. (Though that might be made easier if I can follow the breadcrumbs left by the "many of the community have already taken various linux distributions and got them up and going..." mentioned in the L4T announcement.)
How do I go about learning how to build Linux for this board?
I haven't been able to find any blog posts or mailing list entries from those people who have brought up various Linux distributions.
I did find http://tjworld.net/wiki/Android/Tegra/Linux/SourceCodeRepositoriesAndPatches#Linux, which gives me the Git URL to the Tegra tree (and the name of the maintainer). Lower down on that same page is a pointer to the linux-tegra mailing list. But I didn't find anything like a link to "getting started" instructions in the mailing list.
I'm reading Building Embedded Linux Systems, Second Edition (and have skimmed through Embedded Linux Primer, Second Edition), and perhaps I'll be able to figure it out for myself. (Though I suspect I'm going to get stuck because of the lack of technical documentation on the Ventana.) But I'd appreciate any advice to save me time.
Take a look at the people already running Ubuntu on Tegra 2 devices:
http://hdfpga.blogspot.com/2011/02/ubuntu-on-tegra-2-tablet-android.html
http://forum.xda-developers.com/showthread.php?t=894960
I'd like to set up a cross-compilation environment on a Ubuntu 9.10 box. From the documents I've read so far (these ones, for example) this involves compiling the toolchain of the target platforms.
My question is: how do you determine the required version of each of the packages in the toolchain for a specific target platform? Is there any rule of thumb I can follow?
This is a list found in one of the websites linked above:
binutils-2.16.1.tar.bz2
linux-2.6.20.1.tar.bz2
glibc-2.5.tar.bz2
glibc-linuxthreads-2.5.tar.bz2
gcc-core-4.2.0.tar.bz2
gcc-g++-4.2.0.tar.bz2
But suppose I want to generate executables for standard Ubuntu 8.04 and CentOS 5.3 boxes. What are the necessary packages?
My primary need is to avoid errors like "/usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.11' not found" in the customers' machines but in the future I want to deal with different architectures as well.
It is generally a good idea to build a cross-toolchain that uses the same version of libc (and other libraries) found on the target system. This is especially important in the case of libraries that use versioned symbols or you could wind up with errors like "/usr/lib/libstdc++.so.6: version 'GLIBCXX_3.4.11' not found".
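To see which versioned symbols a binary actually needs (and therefore which target systems can run it), you can grep its dynamic symbol table. The pipeline below runs against inlined sample lines so it is self-contained; on a real binary you would feed it `objdump -T ./myprog` instead (the symbol names shown are illustrative):

```shell
# Sample lines of the kind `objdump -T` prints for an executable
# linked against libstdc++, inlined here for illustration:
syms='0000000000000000  DF *UND*  0000000000000000  GLIBCXX_3.4     _ZSt4cout
0000000000000000  DF *UND*  0000000000000000  GLIBCXX_3.4.11  _ZNSo5writeE'

# The highest GLIBCXX version required tells you the oldest libstdc++
# (and hence the oldest distro) the binary can run on.
printf '%s\n' "$syms" | grep -o 'GLIBCXX_[0-9.]*' | sort -Vu | tail -n 1
```

If the result is `GLIBCXX_3.4.11` but the target distro's libstdc++ only provides up to `GLIBCXX_3.4.10`, you get exactly the loader error quoted above.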
Same Architecture
For generating executables for standard Ubuntu 8.04 and CentOS 5.3 systems, you could install the distributions in virtual machines and do the necessary compilation from within the virtual machine to guarantee the resulting binaries are compatible with the library versions from each distribution.
Another option would be to setup chroot build environments instead of virtual machines for the target distributions.
You could also build toolchains targeted at different environments (different library versions) and build under your Ubuntu 9.10 environment without using virtual machines or chroot environments. I have used Dan Kegel's crosstool for creating such cross-toolchains.
Different Architecture
As I noted in my answer to another cross-compiler question, I used Dan Kegel's crosstool for creating my ARM cross-toolchain.
It appears it may be slightly out of date, but there is a matrix of build results for various architectures to help determine a suitable combination of gcc, glibc, binutils, and linux kernel headers.
Required Package Versions
In my experience, there really isn't a rule of thumb. Not all combinations of gcc, binutils, glibc, and linux headers will build successfully. Even if the build completes, some level of testing is necessary to validate the build's success. This is sometimes done by compiling the Linux kernel with your new cross-toolchain. Depending on the target system and architecture, some patching of the source may be necessary to produce a successful build.
Since you are setting up this cross-compilation environment on Ubuntu 9.10, you might want to look into the dpkg-cross package.
Compiling for other Linux distributions is easiest if you install them in virtual machines (apt-get install kvm) and then compile from within; you can also script this to run automatically. Building a cross-compiler that provides exactly the same versions of all the libraries the other distro ships is nearly impossible.
My question is: how do you determine the required version of each of the packages in the toolchain for a specific target platform?
...
binutils-2.16.1.tar.bz2
gcc-core-4.2.0.tar.bz2
gcc-g++-4.2.0.tar.bz2
Generally pick the latest stable: these only affect your local toolchain, not runtime.
linux-2.6.20.1.tar.bz2
You don't need this. (For targeting embedded platforms you might use it.)
glibc-2.5.tar.bz2
glibc-linuxthreads-2.5.tar.bz2
You don't need these either; that is, you should not download or build them. Instead, link against the versions from the oldest distro you want to support.
Is there any rule of thumb I can follow?
But suppose I want to generate executables for standard Ubuntu 8.04 and CentOS 5.3 boxes. What are the necessary packages?
You survey the distros you want to target, find the lowest-common-denominator (LCD) versions of libc, libstdc++, pthreads, and any other shared library you will link with, then copy those libraries and the corresponding headers from a box that has these LCD versions into your toolchain.
[edit] I should clarify: you really want to get all the dependent libraries from a single system. Picking and choosing the LCD version of each file from different distributions is a recipe for a quick trip to dependency hell.
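A minimal sketch of that single-donor-system approach, assuming you can copy files from one older box (`oldbox` and all paths here are illustrative): gather the LCD libraries and headers into a local sysroot and point the compiler at it.

```shell
# Collect libraries and headers from ONE donor system into a sysroot.
mkdir -p sysroot/usr/lib sysroot/usr/include

# From the donor box (network commands shown as comments, not run here):
#   scp oldbox:/usr/lib/libstdc++.so.6* sysroot/usr/lib/
#   scp -r oldbox:/usr/include/c++      sysroot/usr/include/

# Then build against it so the binary only references the old symbol
# versions:
#   g++ --sysroot="$PWD/sysroot" -o myprog myprog.cpp
```

Because everything in the sysroot comes from one system, the library versions are guaranteed to be mutually consistent, which is the point of the [edit] above.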
Depending on your target platforms, have you considered using Optware?
I'm currently working on getting Mono and Moonlight built for my Palm Pre using the cross-compilation toolchain (and the Optware makefiles handle the majority of dependencies already).
I just started working for the first time with a product that's delivered via the Linux RPM mechanism, rather than as a standalone installer, and realized that this makes the test / release cycle a bit more tricky.
When I was working with installers, we would just change the build numbering in our build system to mark a build as a test or release candidate instead of a development snapshot, and tell people to install only the candidate build for testing. The problem with doing that with RPMs is that if we change the numbering scheme, we'll break the delivery mechanism, and installed machines won't be able to tell which RPM is the latest any more.
The best way I've thought of to get around this is to put the candidate RPMs in a completely separate RPM repository, but this also gets complicated because we have multiple RPMs coming from the same repository that are on different release cycles, so we'll be trying to pull the release candidate version of RPM A from the new repository while still wanting to get development snapshots of RPM B from the development repository.
This must be a pretty common issue for Linux software, so can anyone tell me the best practice? Thanks in advance.
One common methodology in the Linux world is a widely publicized release-numbering convention that indicates whether a build is a development build or a release. For the Linux kernel itself (through the 2.6 era), odd minor versions (2.3, 2.5) were development series, while even ones (2.4, 2.6) were releases.
A quick scan of the RPM guide seems to indicate that using a scheme like this may be the best bet.
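For RPM specifically, a common convention (documented in Fedora's packaging guidelines, for example) keeps `Version` fixed and encodes candidate status in `Release` with a leading `0`, so each candidate upgrades the previous one and the final build outranks them all, without ever breaking the upgrade path. The ordering can be illustrated with a plain version sort, which agrees with rpm's comparison for simple cases like these (the package name is made up):

```shell
# Candidate builds use Release = 0.N.rcN; the final release uses 1.
# Sorting shows every candidate ordering below the final build, so
# `yum update` always moves machines forward:
printf '%s\n' 'foo-1.2-1' 'foo-1.2-0.1.rc1' 'foo-1.2-0.2.rc2' | sort -V
```

With this scheme the candidates and the release can even live in the same repository, which sidesteps the multi-repository bookkeeping described in the question.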