From what I have been reading, a Linux distribution is little more than a packaging of a kernel with various packages and some limited configuration details, such as which window manager and GUI to use by default (assuming you even want a GUI, blech). In the old days there were apparently some unique advantages to particular distributions. For example, Red Hat had the Red Hat Package Manager (rpm). Of course, nowadays rpm is no longer a unique advantage of Red Hat.
So, why even bother with a distribution? Why not just install a kernel and a bunch of packages of one's own choosing? What's the complexity?
Basically, a GNU/Linux distro IS a kernel and a "bunch of packages" (GNU packages) of one's choosing.
People create distros to perform specific tasks: server distros, desktop distros, multimedia-oriented distros, etc.
Creating a Linux distro can be a really educational task, as you get to know how a Linux system is built from scratch.
I recommend checking out LFS (Linux From Scratch). It's a project that guides you through assembling your own Linux distro from scratch, and believe me, it's great fun and indeed YOU WILL LEARN A LOT.
If you're interested in getting to know how a Linux distro works, don't miss this.
The webpage says:
Many wonder why they should go through the hassle of building a Linux system from scratch when they could just download an existing Linux distribution. However, there are several benefits of building LFS. Consider the following:
LFS teaches people how a Linux system works internally
Building LFS teaches you about all that makes Linux tick, how things work together and depend on each other. And most importantly, how to customize it to your own tastes and needs.

Building LFS produces a very compact Linux system
When you install a regular distribution, you often end up installing a lot of programs that you would probably never use. They're just sitting there taking up (precious) disk space. It's not hard to get an LFS system installed under 100 MB. Does that still sound like a lot? A few of us have been working on creating a very small embedded LFS system. We installed a system that was just enough to run the Apache web server; total disk space usage was approximately 8 MB. With further stripping, that can be brought down to 5 MB or less. Try that with a regular distribution.

LFS is extremely flexible
Building LFS could be compared to a finished house. LFS will give you the skeleton of a house, but it's up to you to install plumbing, electrical outlets, kitchen, bath, wallpaper, etc. You have the ability to turn it into whatever type of system you need it to be, customized completely for you.

LFS offers you added security
You will compile the entire system from source, thus allowing you to audit everything, if you wish to do so, and apply all the security patches you want or need to apply. You don't have to wait for someone else to provide a new binary package that (hopefully) fixes a security hole. Often, you never truly know whether a security hole is fixed or not unless you do it yourself.
Of course there are other tools to create a Linux distro based on your hard-drive installation, perhaps for backup purposes.
Linux Live
And a lot of other scripts to get you started; just google for them.
Of course, all of them are automated, user-oriented tools, so don't expect to learn a lot from them.
There are lots, thousands, of Linux distros out there, so it is obviously a waste of time to try to make the "ideal" Linux distro and compete with Ubuntu, Mint, etc.
I still recommend you check out Linux From Scratch, even just as an educational weekend project. Trust me, you will learn a lot.
It also covers creating embedded Linux distros targeting ARM processors and the like.
If you're in the embedded world, the Yocto Project is worth a look.
There are many distributions of Linux. All of them have one thing in common, however: the kernel. And Linux programs run across all of them. If I make a minimalistic distribution from the kernel, will current programs made for Linux run? What defines the differences between distributions? I am a beginner at this stuff, so don't be harsh if it is a stupid question. Thank you.
Yes, with caveats.
You need to make sure you have full C support, and by that I mean something like glibc installed or installable, or you cannot build programs for your minimal install. If you can install and compile C programs on Linux, then you can in effect build practically everything else from scratch.
If you want to be able to download binaries and run them, that is different: the binaries will likely require shared libraries that they had on the systems they were built for. Unless you have those libraries, you cannot run the existing binaries you find online.
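A quick way to see this in practice is ldd, which lists the shared libraries a binary expects (the binary here is just an example):

    # Any library marked "not found" must be provided before the binary will run.
    ldd /bin/ls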
What defines the differences between distributions?
There are a lot of defining factors in each distribution. If we disregard things like...
Licensing, i.e. Red Hat vs Debian
Stance on things like GPL/BSD/NonFree
Release schedules, e.g. Debian vs Ubuntu
Target audience, e.g. Ubuntu vs Debian
I think the biggest defining factor is package management, i.e. yum/rpm vs apt/dpkg, and how the base configuration is managed on the machine. This is certainly the thing I seem to use the most and miss the most when I change distributions. The kernel itself is very rarely on my mind, which is a large part of its success.
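To make that concrete, here is roughly how the same day-to-day operations look on the two families (the package name is just an illustration):

    # Debian/Ubuntu (apt/dpkg)
    apt-get install nginx    # install a package plus its dependencies
    dpkg -L nginx            # list the files the package installed

    # Red Hat/CentOS/Fedora (yum/rpm)
    yum install nginx        # the same operations with different tools and metadata
    rpm -ql nginx            # list the files the package installed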
Most people start with something like ISOLINUX and get a bootable CD, but even then you normally choose a base distribution. If you want to create a base distribution, that's a ton of work. Have a look at this great infographic of the Linux family tree:
https://en.wikipedia.org/wiki/List_of_Linux_distributions#/media/File:Linux_Distribution_Timeline.svg
If you look at Debian/Ubuntu, the amount of infrastructure these distributions have set up is quite staggering. They have millions, perhaps even billions, of lines of code in them, all designed to run on their supported versions. You might be able to take a binary from one of them and run it on Red Hat, but it's likely to fail unless the planets are in alignment, etc. Some people think this is actually a bad thing:
https://en.wikipedia.org/wiki/Ingo_Moln%C3%A1r#Quotes
The basic failure of the free Linux desktop is that it's, perversely, not free enough... Desktop Linux distributions are trying to "own" 20 thousand application packages consisting of over a billion lines of code and have created parallel, mostly closed ecosystems around them... The Linux package management method works reasonably well in the enterprise (which is a hierarchical, centrally planned organization in most cases), but desktop Linux on the other hand stopped scaling 10 years ago, at the 1000 packages limit...
If I make a minimalistic distribution from the kernel, will current programs made for Linux run?
Very few programs actually use the kernel directly. They also need a libc, which is responsible for actually implementing most of the C routines used by either the programs themselves or the VMs running their code.
It is possible to statically link the libc into the program, but this both bloats the size of the program and makes it impossible to fix security issues in the linked libraries without rebuilding the whole program.
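A minimal sketch of the trade-off, assuming a trivial hello.c and gcc:

    gcc hello.c -o hello-dyn          # dynamic: needs a compatible libc.so at run time
    gcc -static hello.c -o hello-st   # static: libc is baked into the (much larger) binary
    ldd hello-st                      # reports "not a dynamic executable"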
Well, certain programs demand a specific version of the kernel. Usually these programs act as "drivers" for the rest of the system (e.g. the Nvidia proprietary drivers: some parts run in kernel space while others run in user space, but they require that very specific kernel module and thus that very specific kernel build).
A less strict case is when a program demands a specific capability from the kernel. For example, almost all modern Linux virtualization systems rely on the cgroups feature, so to use them you need a reasonably fresh kernel.
Nevertheless, a lot of the kernel API is stable, so you can rely on it. But usually programs don't call kernel routines directly. The typical way to use a kernel function is to call the corresponding library routine, which wraps and leverages the kernel API. The main, most basic library of that kind is libc.
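You can watch this layering with strace, which prints the system calls a program ends up making through its libc wrappers (assuming strace is installed):

    # echo goes through libc output routines, which boil down to write() system calls
    strace -e trace=write echo hello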
Technically, programs compiled against one version of libc (as well as other shared libraries) can be used with slightly different versions of the corresponding libraries. For example, a lot of people use Skype compiled for SuSE on completely different Linux distributions. Skype is a pretty complex application with a lot of libraries linked in, but it nevertheless works without any significant problems. The same goes for a lot of other proprietary programs which couldn't be compiled for a given distribution or even for a given installation. But sometimes shit just happens :) Those binary incompatibilities are quite rare, but they do occur from time to time.
I have a problem with making a .img file that I can make bootable on a USB stick.
I have a folder consisting of many RPM files (including a Linux distribution), some bash script files, etc. The bash script installs a Linux server and two other pieces of software, and also sets up MySQL and PHP. I want to make a .img file from this folder so that when I make the .img file bootable and boot from USB, it automatically installs everything on the computer.
My problem is that I don't know how to make such a .img file. Is there a specific bash command I should use? Could you help me do this? Some clues, documentation to read to understand the process, or any software to use would be perfect. I really appreciate your help.
Thank you.
The answer is:
Don't do it.
At least not in the way you're proposing.
You are specifying a solution to a problem without really defining your requirements. How many standard packages? How many/how big are your additions? What are the gating items (e.g. need web server, need sshd)? Do you need just a few standard packages or several hundred?
Linux distros, such as Red Hat, CentOS, Fedora, Canonical/Ubuntu, and Debian, expend considerable man-hours to get this right.
So, you need to know what distro you're using. By mentioning rpms, you're probably using Red Hat, CentOS, or Fedora. They have procedures for creating "live" CDs (and/or USB sticks) and "full install" DVDs. But this can be a big job, particularly if you're trying to graft on extra files that they don't know about.
I highly recommend you use a standard installer for your distro [one that has been heavily QA'ed]. Then, after installation and reboot, extract your additional packages from separate media or download them from a server you control. The install media for your stuff could consist solely of a bash script that creates /etc/yum.repos.d/mystuff.repo and kicks off yum install mystuff.
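A minimal sketch of what that script could look like (the repo name, URL, and package name are all hypothetical):

    #!/bin/sh
    # Point yum at a server you control...
    cat > /etc/yum.repos.d/mystuff.repo <<'EOF'
    [mystuff]
    name=My custom packages
    baseurl=http://repo.example.com/mystuff/el6/
    enabled=1
    gpgcheck=0
    EOF
    # ...then pull in a meta-package that depends on everything you need.
    yum -y install mystuff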
Also, if you were to do a plug-in-and-boot installer, do you want it to just erase/repartition the main hard drive without asking (i.e. fully automatic)? Or do you want it to show the existing partitions, etc., like standard installers do?
Getting back to requirements: why do you need a one-shot installer?
How many systems are you going to install this on? 5, 10, 100, 1000? Are they all in a server farm? You might be better off with PXE boot and install from a central server (see the sketch below).
For example, Google has hundreds of thousands of servers [or more]. They have a need for this. But, they also have entire teams of developers devoted to the in-house methodologies that they use.
How often are you going to have to do this for a given server? After the initial install, what is your plan/method for updates (e.g. yum, etc.)?
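If you do go the PXE route, the server side can be as small as a dnsmasq configuration plus a TFTP directory; this is only a sketch, and all addresses and paths here are assumptions:

    # /etc/dnsmasq.conf - minimal PXE boot server
    dhcp-range=192.168.1.100,192.168.1.200,12h
    dhcp-boot=pxelinux.0
    enable-tftp
    tftp-root=/srv/tftp   # holds pxelinux.0 plus the installer kernel/initrd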
By using the standard install, you're not responsible for QA of the entire system [standard system + your custom stuff], only your custom stuff.
For example, Fedora discourages any "full install". They now prefer the "live boot" and install from Internet approach.
One of the reasons is that the full install disk gets created [with lots of QA]. But, it's static. If a package has an update, the full install will install the old, unpatched version.
I've had cases where I used it, then did yum update after reboot. The full install disk had installed some things that became obsolete/incompatible within a week or two after release. They clashed with the update and things broke, and I had to intervene manually. This is much less likely to happen with a live boot install, which will download the latest [and presumably most bug-fixed] packages.
On the fedoraproject.org site, you can find documentation on creating live CDs and/or USBs. They can even show you how to add some custom files. Other distro sites will have similar documentation.
BTW, I have been doing OS install kits since 1981, so all of the above comes from experience. I've created them from scratch and hacked up ones from distros.
Can it be done? Sure. Do you really want to do it or should you want to do it? Well, maybe. Just be aware of what you're taking on in terms of maintenance.
I am an indie game developer working on the Windows platform, but I have little to no experience with Linux and deploying apps for it. On Windows I am polishing my game, written in C++11, based on SDL 2.0 with several other cross-platform dependencies (like AngelScript or pugixml), and I want to distribute it on Linux too; I have a few questions about that. The game is commercial and closed source, currently on Steam Greenlight, but I want to distribute a free alpha version downloadable from my website regardless of Greenlight status.
1.) Are the main Linux distributions ABI (application binary interface) compatible? Or do I need to compile my game on every supported distribution/platform?
2.) If so, which distributions/platforms are reasonable choices to support?
3.) What is the best way to install an app and its dependencies on Linux? I've read about the deb and rpm systems, but it's still confusing: is there any way to automatically generate setup packages for the various distributions?
4.) How does Steam work with Linux? How should I prepare my app for distribution via it?
Excuse me if I'm asking the wrong questions; the whole world of Linux is pretty new to me and I got lost reading various articles and man pages...
This depends on what the distribution is derived from. Generally, there's no need to recompile a program built on something like Ubuntu for Fedora, so long as the code remains unchanged, since Ubuntu and Fedora use the same libraries (albeit perhaps in different locations) and anything OpenGL-related is a driver issue; therefore recompiling is hardly a requirement. I would be extremely surprised if you had to recompile your software, since all distros use pretty much the same set of libraries, ship bash, and use the Linux kernel, but they have different package managers. The last part is where it gets complex:
The aforesaid distributions have different package managers, which requires you to repackage your software accordingly. You could release pre-compiled binaries in a tar.gz file and simply have distro maintainers package the software for you, though if you want to control how your software is distributed, you should do that yourself. Because of this issue of many competing package managers, people still resort to recompiling source code through a makefile, which can be generated by CMake. That route breaks if certain dependencies are, for whatever reason, 'renamed'; because of a simple name change the program suddenly doesn't find the dependency. Since there's also no naming convention, it makes life even harder. The most important virtue here is trust: trust that developers follow naming conventions so everyone can reference the same package by the same name.
The best distros to support are the most popular ones: Ubuntu and openSUSE would be great starting points. Linux Mint, Fedora, Red Hat, and Debian use package managers similar to the aforesaid.
Meanwhile, you should know that you can't statically link GPL'd code into your software without also making your software GPL. A way to work around this is to resolve dependencies by either: A. including the relevant dependencies in the same folder as the executable (much like *.dlls on Windows), or B. depending on the system your program runs on to provide the same libraries it was compiled and linked against. The latter is riskier, since it assumes the user will have the libraries, and unmodified ones at that. The former makes your package bigger, but ensures consistency across all systems.
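For option A, a common pattern is a small launcher script next to the binary that points the dynamic linker at the bundled copies first (all paths and names here are illustrative):

    #!/bin/sh
    # Find the directory this script lives in, then prefer the bundled libs.
    DIR="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$DIR/lib:$LD_LIBRARY_PATH"
    exec "$DIR/mygame-bin" "$@"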
As for installation, you would need a bash script that moves the contents of your directory to the right locations. Generally, the main app goes into /usr/bin and any app-related data goes into the home folder. This again depends on the distro; look for a .local directory, or you could create a hidden directory dedicated to your app (hidden folders are prefixed with a period). Why put this stuff in the home folder, and what goes there? The why: because the user has read and write permissions there by default. The what: anything that needs to run with the app without the user having to authorise it. Dependencies should thus be located in the home folder, preferably under your own directory. Conventions differ, and some may disagree with me on this one.
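A sketch of such an install script under those conventions (every path and name is hypothetical, and copying into /usr/bin requires root):

    #!/bin/sh
    # System-wide executable (run as root)...
    install -m 755 mygame /usr/bin/mygame
    # ...and per-user data in a hidden directory in the home folder.
    mkdir -p "$HOME/.mygame"
    cp -r data/. "$HOME/.mygame/"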
You might also want to use Steam's API, which does most of this work for you. Under Steam, your app lives under Steam's own directory and functions as a Steam app with all the functionality therein.
http://www.steampowered.com/steamworks/
That link explains how to get your app on Steam. I have to say I was really impressed, and they even include code samples. The best part is that this API is available on Linux as well. Beyond that, I don't know much, other than that Steam handles the execution of your app through its own layer, so there's no need to independently distribute your app via the previous steps.
Note that you can also distribute your software through the Ubuntu Software Centre if you are interested.
http://developer.ubuntu.com/apps/
Ubuntu, though, focuses more on getting apps running regardless of platform.
Know that Linux has no single convention; its conventions derive from pragmatism, not theory. At the end of the day, how your software runs on Linux is up to you.
I'm not a game developer, and I come from the open-source community, so I can't really advise on delivering binaries. I'll try to answer some of your questions though:
Valve has a Steam runtime you can target on Linux (https://github.com/ValveSoftware/steam-runtime); this would be the best way to port your game. I saw it mentioned in one of their Linux dev videos on YouTube. My understanding is that it bundles a number of libraries, including SDL, and is set up to emulate a specific version of Ubuntu. So if you write your game against the Steam runtime, it will run in any Linux distro that Steam has been ported to.
As for natively packaging your game, one thing to consider: if you package it as a deb or rpm and declare dependencies on distro-provided libraries, your app may break when those libraries are updated (some distros update libs quite often; others are more stable). Dynamically linking against system libraries works well for open source, since people can patch the code when libraries change, but it's not ideal for closed-source stuff.
You can statically link your binary at build time, which means you have a larger binary, but then you don't have to worry about the app breaking when libs are updated.
Some programs, like Chrome, bundle their own libs (which are essentially forks of the system libs). Again, this makes the download much larger, and it also has the potential to cause security problems; people tend to frown on this. (See: http://lwn.net/Articles/378865/)
1.) No, the ABIs of the main Linux distributions are not fully compatible. In a narrow sense they are mostly compatible, but in a broader sense there are many differences in how some elements of the system work, are configured, etc. OTOH, making "packages" is not a big problem per se, just some work. Also, what works for a "major" distribution (like Debian) works (pretty much always) on all derivatives (like Ubuntu, Mint...).
2.) A good starting list to support is: Debian (.deb), Red Hat and Fedora (.rpm for both). Their package formats and tools are mature and well known. This will "cover" a lot of derivatives.
3.) There are some "cross-distribution" package builders, but they are mostly not up to the task. Writing a definition script for most package formats is not hard once you get the hang of it (see the sketch after this list). Also, some commercial tools have "installation script generators" for Linux which take care of things for you (they don't generate .deb or .rpm, rather a complex shell script, similar to a Windows EXE installer).
4.) Sorry, I don't know much about Steam. O:)
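To give a feel for what a definition script involves, here is a skeleton of a binary-only RPM spec (every name and path is a placeholder, and a real spec needs the binary staged where rpmbuild expects sources):

    # mygame.spec - minimal sketch
    Name:     mygame
    Version:  0.1
    Release:  1
    Summary:  Example closed-source game
    License:  Proprietary

    %description
    Prebuilt game binary packaged for RPM-based distributions.

    %install
    mkdir -p %{buildroot}/usr/bin
    install -m 755 %{_sourcedir}/mygame %{buildroot}/usr/bin/mygame

    %files
    /usr/bin/mygame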
So, from my experience, the best way is to accept the ugly truth: you have to select a few major distributions and their versions ('cause things can change a lot between versions), make packages for them, and test often on all of them. And be happy that you're not developing some long-lived kernel module/driver, because if you add tracking kernel API changes to the whole picture... :)
I've been using Linux at university for quite a while, and it seems much more customisable and better for coding.
So I want to switch from Windows 7 to Linux at home.
What branch of Linux should I use? I'm an emacs user, if that gives any insight.
Which desktop environment should I use? At uni we use KDE, but it's too graphical; often I just click on stuff instead of using the terminal. I want one that encourages me to use the terminal more.
And the biggest question: how do I install it all? Should I put everything on an external hard drive and wipe my computer completely?
I primarily program in Java and Python.
I would recommend that you first try Linux off a Live CD/DVD: Linux Mint, Ubuntu, etc.
Just download and burn the .iso onto blank media and boot your computer off it. Play around, check out various desktop environments, and see if all your hardware works with the specific Linux distribution. This step is very useful for deciding which distribution you actually want to install, especially the hardware check: while things have been improving, the biggest obstacle you may face in configuring your computer to run Linux is often hardware incompatibility. Just make sure everything that you need to work actually works.
If you have no issue wiping out Windows, Linux installation is pretty straightforward these days; it generally takes even less time than re-installing Windows. I would browse the web for installation notes for your specific computer model to see if anyone has already succeeded, so that you can just follow along. That saves a lot of time.
I use Debian (Wheezy now) and KDE. It's very easy to install and switch desktop environments after installing Linux though, so that shouldn't be any concern.
I suggest creating a virtual machine using VMware or VirtualBox. As far as the distribution goes, Linux Mint and Ubuntu are pretty user-friendly for first-time installations. And for the desktop environment, I suggest Xfce.
A few Google searches will do you good. I think a virtual environment will be much easier to manage than partitioning a hard drive.
As for the installation step: since you use Windows 7, you may want to make a full backup of your hard drive first, so if things go wrong you will be safe and able to recover.
I was in a somewhat similar situation recently, figuring out which Linux distro to use. Previously I had luck with Scientific Linux, but this time it didn't like my laptop hardware for some reason: after wake-up, the wireless network card got stuck and wasn't picking up any signal. I didn't want to recompile the kernel, etc., so I installed Ubuntu, but GNOME 3 was a show-stopper; I had to roll back to GNOME 2. Later I tried, and liked a lot, the Xfce desktop, which I use right now on my workstation and laptop.
Java, Python, and emacs probably work well with any Linux distribution out of the box, so which one to choose is up to you after all. Good luck!
Sorry, forgot to mention: all contemporary Linux distributions can be installed dual-boot, so you can keep your Windows 7 setup alongside Linux (if you have enough free space); moreover, the Windows partition will be accessible from Linux, which is handy sometimes.
I want to build a lightweight Linux configuration to use for development. The first idea is to use it inside a virtual machine under Windows, or on old laptops with 1 GB of RAM at most. Maybe even as a distributable environment for developers.
So the whole idea is to use a LAMP server, a Java application server (Tomcat or Jetty), and X Windows (any window manager, from FVWM to Enlightenment), Eclipse, maybe jEdit, and of course Firefox.
Edit: I am changing this post to compile a list of distros and window managers that can be used to configure a real lightweight development environment.
I am basing this on personal experiences with these systems. Info about the distros can easily be found on their sites, so please focus on personal use of those systems.
Distros
Ubuntu / Xubuntu
Pros:
Personal experience in old systems or low-RAM environments - #Schroeder, #SCdF
Several suggestions based on personal knowledge - #Kyle, #Peter Hoffmann
Gentoo
Pros:
Not targeted at desktop users - #paan
Doesn't come with a huge amount of applications - #paan
Slackware
Pros:
Suggested as having the best performance, given a wise install/configuration - #Ryan
Damn Small Linux
Pros:
Main focus is the lightweight factor - 50MB LiveCD - #Ryan
Debian
Pros:
Very versatile, can be configured for both heavy and lightweight computers - #Ryan
APT as package manager - #Kyle
Emphasis on compatibility and usability - #Kyle
-- Feel free to add pros and cons to this, so we can compile a good reference.
-- X Windows suggestions keep coming back to Xfce. If others are to be added here, open a section for them like the distro one :)
Try using Gentoo. Most distros with X are targeted at desktop users and by default include a lot of applications you don't need, while at the same time lacking a lot of the stuff you do need. You could customize the install, but usually a lot of useless stuff gets into the 'base' install anyway.
If you're worried about compile time, you can tell Portage (the Gentoo package management system) to fetch binaries when available instead of compiling. It gives you the flexibility of installing a system with only the stuff you want.
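A sketch of that, assuming your Portage is configured with a binary package host (the package atom is just an example):

    # -g/--getbinpkg fetches prebuilt packages where available;
    # -k/--usepkg uses local binary packages before compiling from source.
    emerge --getbinpkg --usepkg www-servers/apache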
I used Gentoo and never went back.
http://www.gentoo.org/
I installed Arch (www.archlinux.org) on my old Mac Mini (there is a PPC version), which only has 512 MB RAM and a single 2.05 GHz processor, and it absolutely flies!
It is almost bare after installation, so it's about as lightweight as you can get, but it comes with pacman, a software package manager which is as good as apt-get (Ubuntu/Debian) if not better.
You have a choice of installing many desktop/window managers such as awesome, dwm, wmii, fvwm, GNOME, Xfce, KDE, etc., straight from pacman with a single command.
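For example (group and package names are the Arch names current as of writing, so treat them as illustrative):

    pacman -S xfce4      # pulls in the whole Xfce group
    pacman -S openbox    # or just a single lightweight window manager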
In my opinion(!!) it's lightweight like Gentoo, but being a binary distro it isn't as much hassle (although I can imagine it can be a little daunting if you're new to Linux). I had a system running (with X and the awesome WM) in about 1.5 hours!
I'm in a similar situation to Schroeder; having a laptop with 512 MB RAM is a PITA. I tried running Xubuntu, but tbh I didn't find that it was either usable or a great saver on RAM. So I switched to Ubuntu and it's worked out pretty well.
My 2c:
I'd recommend basing your system on Debian: the apt system has become the de facto way to quickly install and update programs on Linux. Ubuntu is Debian-based with an emphasis on usability and compatibility. As for window managers, in my opinion Xfce hits the right balance between being lightweight and functional. The Ubuntu-based Xubuntu would probably be a good match.
Remember: for security, only install essential network services like SSH.
If it were my decision, I would set up a PXE boot server to easily install Ubuntu Server Edition on any computer on the network. The reason I would choose Ubuntu is that it's the one I've had the most experience with and the one I can most easily find help for. If I needed a window manager for the particular installation, I would also install either Xfce or Blackbox. In fact, I have an old laptop in my basement that I've set up in exactly this way, and it's worked out quite well for me.
I would recommend Arch Linux, which I'm using now. Xfce is my choice of desktop environment for now, but if you prefer a more lightweight one you can try LXDE.
Arch Linux is much like Gentoo, but with prebuilt binary packages and simpler configuration.
If all those distros still don't work for you, you may want to try LFS - Linux From Scratch.
I would recommend Xubuntu. It's based on Ubuntu/Debian and optimized for a small footprint with the Xfce desktop environment.
I am writing this on a Centrino 1.5 GHz, 512 MB RAM machine running Ubuntu. It's Debian-based and is the first Linux distro I have tried that actually worked with my laptop on first install. Find more info here.
I second the Arch suggestion. You will be tinkering with quite a few configuration files to get everything going, but I've found none better for a lean and mean setup.
I suggest you check out the following three distros:
Damn Small Linux - Very lightweight. Includes its own lightweight browser (Dillo), but you can install Firefox easily. The entire distro fits on a 50 MB LiveCD.
Slackware - Performance-wise, Slackware will probably perform the best of the three, but I'd suggest running your own benchmarks on your hardware.
Debian - Debian is extremely versatile. This is the only distro of the three I'd recommend for both a 32-bit 1 GB RAM laptop and a 64-bit 4 GB RAM machine.
I would recommend something much lighter than Xfce: IceWM. It takes some time to configure it to be really usable, but it's worth it. I have a fully running IceWM setup which only takes about 5 MB of RAM.
The primary reason I use Linux is because it can be lightweight. In 1999, I used Redhat, Mandrake (now Mandriva), and Debian. All were faster and more lightweight than my typical Windows 98 installations.
Not so anymore. I now have to research and experiment in order to find distros that are lightweight in both storage and memory footprint, and speedy. These are the ones I have played with lately:
SliTaz, a French distro (I use the English version and it works well).
CrunchBang, a lightweight Ubuntu- and Debian-derived distro.
Crux, which is source-only and very low-level geeky (I chose it because it has good support for PowerPC, and I was using it on my aging PowerBook G4).
Currently, however, I use Arch Linux for most of my work, as it offers a good compromise between lightweight and feature-full.
But if you decide to roll your own distro from scratch, you may want to try Buildroot or OpenEmbedded. I do not have much experience with OpenEmbedded yet, but using Buildroot I have been able to create a very simple OS that boots quickly, loads only what I want, and only takes up 7 MB of storage space (adding development tools would increase this greatly, of course; I am merely using it as an ssh terminal, although I can do some editing with vi and some text-only web browsing).
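The basic Buildroot flow, roughly (the URL and output paths follow the project's documentation and may vary by version):

    git clone https://git.buildroot.net/buildroot
    cd buildroot
    make menuconfig   # choose target architecture, toolchain, and packages
    make              # resulting images land in output/images/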
As for window managers, I have been very happy with Openbox. I frequently experiment with lighter-weight window managers listed on this page, however.
Here are my opinions as well. I have used Fedora, Gentoo, SliTaz, Arch Linux, and Puppy Linux for development. The constraints: the system's virtual image had to be under 800 MB, to allow easy download, and include all the necessary software. The system had to be fast and customizable, and it had to support version control (SVN and Git), XAMPP or LAMP, an SSH client, a window environment (X or whatever) with the latest video drivers/higher resolution, and some graphical manipulation software for images.
I tried Arch Linux, Puppy, and SliTaz. I have to say that SliTaz was the easiest to work with and to set up. The complete base-OS install from the image is around 120 MB using the cooking version. TazPkg is a great package manager, but some of the listed packages were outdated, and some of the latest versions needed to be built from source.
SliTaz is extremely lightweight, and you have to live with some older packages in the supported TazPkg package list. Support is increasing, though, and XAMPP, Java, Perl, Python, and SVN port well using TazPkg with the latest versions. SliTaz is all about customization and being lightweight. The final size was 800 MB with all the necessary software; Arch Linux and Puppy, although also lightweight, were over 1.5 GB after all of the software was installed. Their base systems were not comparable to SliTaz's.
If anyone is interested in a virtual image of SliTaz with XAMPP to try out, get in touch and a link will be posted.
All the best and happy development! :)