Setting up an Environment for Buffer Overflow Learning

I am currently reading several security books (my passion) on secure programming; however, the distros they provide on disc are either faulty or nonexistent.
Books: Hacking: The Art of Exploitation (2nd Ed.) and Gray Hat Hacking (2nd Ed.)
The issue is that when I try to follow the examples, newer distros obviously have stack protection and other security features implemented to prevent exactly these situations. I have tried to manually set up the environment provided with Hacking: The Art of Exploitation, but I have failed.
I have also tried DVL (Damn Vulnerable Linux), but it's way too bloated. I just want a minimal environment that I can keep on a small partition and choose from the bootloader, or run in a small VirtualBox VM.
So my question is this: how do I go about setting up an environment (a distro with an old kernel) in which I can follow most of these examples? If someone could tell me the kernel and GCC versions of DVL, I could possibly get most of it set up myself.

You need to rebuild the kernel without stack and heap protections, including the non-executable stack. You then need to compile using gcc flags that turn off the userspace protections, one of which is -fno-stack-protector. Also, because you will run into it soon enough: you probably want to compile your program statically, since that makes it a bit easier to understand what is going on when you are debugging your way into your 0x41414141 ("AAAA") payload.
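As a rough sketch of what that looks like (vuln.c stands in for your own practice program, and the sysctl knob assumes a 2.6-or-later kernel):
$ echo 0 | sudo tee /proc/sys/kernel/randomize_va_space   # turn off ASLR system-wide
$ gcc -g -static -fno-stack-protector -z execstack -o vuln vuln.c
$ gdb ./vuln   # a static binary with symbols makes the crash easy to inspect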
Also, depending on your definition of "bloat", it might be easiest to just download an older Linux distro, such as Red Hat 5 or an old Slackware, and install and use that with its default toolchain.

If you still have DVL available, you can use the commands:
$ uname -r
$ gcc --version
to find out for yourself.
Edit: according to distrowatch.com, DVL ships Linux kernel 2.6.20 and GCC 3.4.6.

There is an article on the sevagas website that is related to your question:
How-to setup a buffer overflow testing environment

How to cross-compile CUPS on ARM in a Linux environment? [duplicate]

I am interested in cross-compiling a Linux kernel for an ARM target on an x86 host. Are there some good practices you recommend? Which is the best cross-compile suite in your opinion?
Have you set up a custom cross-compile environment? If so, what advice do you have? Is it a good idea?
There are two approaches I've used for ARM/Linux tools. The easiest is to download a pre-built tool chain directly.
Pro: It just works and you can get on with the interesting part of your project
Con: You are stuck with whichever version of gcc/binutils/libc they picked
If the latter matters to you, check out crosstool-ng. This project is a configuration tool similar to the Linux kernel configuration application. Set which versions of gcc, binutils, libc (GNU or uClibc), threading, and Linux kernel to build, and crosstool-ng does the rest (i.e. downloads the tarballs, configures the tools, and builds them).
Pro: You get exactly what you selected during the configuration
Con: You get exactly what you selected during the configuration
meaning you take on full responsibility for the choice of compiler/binutils/libc and their associated features/shortcomings/bugs. Also, as mentioned in the comments, there is some "pain" involved in selecting the versions of binutils, the C library, etc., as not all combinations necessarily work together or even build.
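For illustration, a crosstool-ng session typically looks something like this (a sketch; the available sample names vary between versions):
$ ct-ng list-samples                 # known-good starting configurations
$ ct-ng arm-unknown-linux-gnueabi    # take an ARM sample as a base
$ ct-ng menuconfig                   # pick the gcc/binutils/libc/kernel versions
$ ct-ng build                        # download the tarballs and build the chain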
One hybrid approach might be to start with the pre-built tools and replace them later with a custom solution via crosstool-ng if necessary.
Update: The answer originally used the CodeSourcery tools as an example of a pre-built tool chain. The CodeSourcery tools for ARM were free to download from Mentor Graphics, but they are now called the Sourcery CodeBench and must be purchased from Mentor Graphics. Other options now include Linaro as well as distribution specific tools from Android, Ubuntu, and others.
I use the emdebian toolchain for compiling stuff for my ARM machines that isn't happy being compiled natively within their small resources (/me glares at the kernel). The main package is gcc-4.X-arm-linux-gnueabi (X = 1, 2, 3), which provides appropriately suffixed gcc/cpp/ld/etc. commands. I add this to my sources.list:
deb http://www.emdebian.org/debian/ unstable main
Of course, if you're not using Debian, this probably isn't so useful, but by gum it works well for me.
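For what it's worth, after adding that line, installing and using the toolchain goes roughly like this (a sketch; the package name follows the gcc-4.X pattern above and depends on your release):
$ apt-get update
$ apt-get install gcc-4.3-arm-linux-gnueabi
$ arm-linux-gnueabi-gcc-4.3 -o hello hello.c   # builds an ARM binary on the x86 host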
I've used scratchbox while experimenting with building apps for maemo (Nokia N810), which uses an ARM processor. Supposedly, scratchbox is not restricted to maemo development.
I've used crosstool on several targets. It's great as long as you want to build your toolchain from scratch.
Of course there are several pre-built toolchains for ARM as well; just google it -- too many to mention here.
1) In my opinion, building your own toolchain works best. You end up having tight control over everything, plus if you're new to embedded Linux, it's a GREAT learning experience.
2) Don't go with a commercial toolchain. Even if you don't want to take the time to build your own, there are free alternatives out there.
If your company will spend the money, have them buy you a JTAG debugger.
It will save you tons of time, and it allows you to easily learn and step through the kernel startup, etc.
I highly recommend the Lauterbach JTAG products... They work with a ton of targets and the software is cross-platform. Their support is great as well.
If you can't get a JTAG debugger and you're working in the kernel, use a VM for that: User Mode Linux, VMware, etc. Your code will be debugged on x86; porting it to your ARM target will be a different story, but it's a cheaper way to iron out some bugs.
If you're porting a bootloader, use U-Boot. Of course, if you're using a reference platform, then you're probably better off using what they provide with the BSP.
I hope that helps.
Buildroot is a tool I've had reasonably good luck with for building a custom uClibc-based toolchain from scratch. It's very customizable, and not excessively particular about what distribution you happen to be running on.
Also, many of its existing users (i.e. embedded router distros) are also targeting ARM.
If you're using Gentoo, getting a cross-compiling toolchain is as easy as
$ emerge crossdev
$ crossdev -t $ARCH-$VENDOR-$OS-$LIBC
where ARCH is arm or armeb, VENDOR is unknown or softfloat, OS is linux, and LIBC is gnu or uclibc.
If all you want is a compiler (and linker) for the kernel, the LIBC part is irrelevant, and you can use -s1/--stage1 to inform crossdev that you only need binutils and gcc.
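For example, a kernel-only ARM toolchain would be built with something like (a sketch; the tuple follows the pattern above):
$ crossdev -s1 -t arm-unknown-linux-gnu     # stage1: binutils and gcc only
$ arm-unknown-linux-gnu-gcc --version       # the cross tools are prefixed with the tuple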
This is what Eurotech uses for their Debian ARM distribution. You'll note that they don't recommend using a cross compiler if you can avoid it. Compiling on the target itself tends to be a more reliable way of getting output that you know will run.

How easy is it to make a Linux distribution? [closed]

From what I have been reading, a Linux distribution is little more than a packaging of a kernel with various packages and some limited configuration details, such as which window manager and GUI to use by default (assuming you even want a GUI, blech). In the old days there were apparently some unique advantages to distributions. For example, Red Hat had the Red Hat Package Manager (rpm). Of course, nowadays rpm is no longer a unique advantage of Red Hat.
So, why even bother with a distribution? Why not just install a kernel and a bunch of packages of one's own choosing? What's the complexity?
Basically, a GNU/Linux distro IS a kernel and a "bunch of packages" (GNU packages) of one's choosing.
People create distros to perform specific tasks: server distros, desktop distros, multimedia-oriented distros, etc.
Creating a Linux distro can be a really educational task, as you get to know how a Linux system is built from scratch.
I recommend checking out LFS (Linux From Scratch). It's a project that guides you through assembling your own Linux distro from scratch, and believe me, it's great fun and indeed YOU WILL LEARN A LOT.
If you're interested in getting to know how a Linux distro works, don't miss this.
The webpage says:
Many wonder why they should go through the hassle of building a Linux system from scratch when they could just download an existing Linux distribution. However, there are several benefits of building LFS. Consider the following:
LFS teaches people how a Linux system works internally
Building LFS teaches you about all that makes Linux tick, how things work together and depend on each other. And most importantly, how to customize it to your own tastes and needs.

Building LFS produces a very compact Linux system
When you install a regular distribution, you often end up installing a lot of programs that you would probably never use. They're just sitting there taking up (precious) disk space. It's not hard to get an LFS system installed under 100 MB. Does that still sound like a lot? A few of us have been working on creating a very small embedded LFS system. We installed a system that was just enough to run the Apache web server; total disk space usage was approximately 8 MB. With further stripping, that can be brought down to 5 MB or less. Try that with a regular distribution.

LFS is extremely flexible
Building LFS could be compared to a finished house. LFS will give you the skeleton of a house, but it's up to you to install plumbing, electrical outlets, kitchen, bath, wallpaper, etc. You have the ability to turn it into whatever type of system you need it to be, customized completely for you.

LFS offers you added security
You will compile the entire system from source, thus allowing you to audit everything, if you wish to do so, and apply all the security patches you want or need to apply. You don't have to wait for someone else to provide a new binary package that (hopefully) fixes a security hole. Often, you never truly know whether a security hole is fixed or not unless you do it yourself.
Of course, there are other tools to create a Linux distro based on your hard-disk installation, perhaps for backup purposes.
Linux Live
And lots of other scripts to get you started; just google for them.
Of course, all of them are automated tools aimed at end users, so don't expect to learn a lot from them.
There are thousands of Linux distros out there, so it is obviously a waste of time to try to make the "ideal" Linux distro and compete with Ubuntu, Mint, etc.
I still recommend you check out Linux From Scratch, if only as a weekend educational project. Trust me, you will learn a lot.
It also covers creating embedded Linux distros targeting ARM processors and the like.
If you're in the embedded world, the Yocto Project is worth a look.

How do I build an app for an old linux distribution, and avoid the FATAL: kernel too old error?

I distribute a statically linked binary version of my application on Linux. However, on systems with the 2.4 kernel, I get a segfault on startup and the message: "FATAL: kernel too old."
How can I easily get a version up and running on a 2.4 kernel? Some of the libraries I need aren't even available on old Linux distributions circa 2003. Is there an apt-get install or something that will allow me to easily target older kernels?
The easiest way is to simply install VirtualBox (or something similar, e.g. VMware), install CentOS 3 or any suitably old distro with a 2.4 kernel, and build/test your app on that.
Since you're getting "kernel too old", chances are you're relying on features not present in 2.4 kernels, so you'll have to track down and rework those.
The error might simply be caused by linking statically against glibc. You could try linking glibc dynamically and all your other libs statically, though to be backwards compatible you'd have to build your app on an old glibc system. Using the LSB tools to build could help too.
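A sketch of that mixed linking (myapp and libmine are placeholders): everything between -Wl,-Bstatic and -Wl,-Bdynamic is linked statically, while glibc, which gcc appends after your libraries, stays dynamic:
$ gcc -o myapp main.o -Wl,-Bstatic -lmine -Wl,-Bdynamic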
For my use case, I can't statically link my supporting libraries. Also, current Linux distributions seem to make this difficult to accomplish in certain situations. But I needed my application binaries to run on 10-year-old Linux systems.
I also didn't want to limit myself to an ancient 10-year-old C/C++ compiler, and I found that the hardware I needed to use prevented me from installing a 10-year-old Linux distribution, for some reason.
So, I did this:
Installed Docker.
Within a Docker instance, installed a 10-year-old Linux system (I used Debian's Lenny distribution). This has the added advantage of making the build system available to any other machine that can run Docker.
Within the Docker instance, built the current GNU compilers (8.3.0 when I did this).
This gave me a modern compiler that compiled binaries that would run on very old Linux systems. I did this for both 32-bit and 64-bit processors.
From there, I created a series of scripts that let me use the Docker-contained cross compiler to build all my supporting libraries. I made sure to set the rpath of my compiled binaries to a path relative to the binaries (using -Wl,-rpath,$ORIGIN/../lib), and built a script to retrieve any supporting libraries from the compiler: g++ -print-search-dirs gave me the paths, ldd told me which supporting libraries my binaries needed, and some aggressive bash scripting found those libraries within the g++ search dirs and dropped them into the rpath I had set up.
From there, I package my binary accordingly, with all supporting libs.
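Stripped to its essence, the core of those scripts amounted to something like this (paths and names are illustrative):
$ g++ -o myapp main.cpp -Wl,-rpath,'$ORIGIN/../lib'   # binary finds its libs relative to itself
$ mkdir -p package/bin package/lib && cp myapp package/bin/
$ ldd myapp | awk '/=> \// {print $3}' | xargs cp -t package/lib/   # copy in the resolved libs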
Yeah, this is somewhat painful, but it results in a fully functioning binary capable of working on ridiculously old Linux systems without having to install different Linux distributions on multiple virtual machines.
I tried creating a proper cross-compiler (native to the current Linux distribution hosting my Docker images), but found it too difficult to work with, even with the best tools I could find to help me. Compiling the compiler within a Docker image took far less of my time and worked rather smoothly.

Linux distribution binary compatibility

Is there any way to build a binary on one Linux distribution and run it on another distribution with the same architecture? Or should I compile and build it separately on each distribution?
Is there any compatibility between Red Hat and Debian-based distributions for binary files?
(I want to use my Ubuntu binary on Fedora!)
Enter the Linux Standard Base (LSB), which aims to reduce the differences between individual Linux distributions.
See
http://www.linuxfoundation.org/collaborate/workgroups/lsb
http://en.wikipedia.org/wiki/Linux_Standard_Base
Statically linking your binaries makes them LESS portable, because some libraries then won't work correctly on that machine (differing authentication methods, etc.).
If you statically link any "unusual" libraries and keep your set of supported distros to a minimum, you should be ok.
Don't statically link the C library (or the whole binary), that's a recipe for trouble :)
Look at what (e.g.) Google do with Chrome.
What language is your application coded in? If it's in a language like Python (with no C bindings), or Java, or any other VM-based language, then I think you can trust the VM to make sure your application works across different Linux distributions.
Also, there is the Linux Standard Base which you can refer to.
HTH, Amit
I realize this is a very old question, but it comes up high in search results and this hasn't been mentioned:
CDE is a tool for creating portable Linux applications. It packages together all the needed files (including libraries) by analyzing the program at run time. I have used it successfully on command-line tools several times, one example being getting tcpdump to run on an old hardware appliance running a custom distribution. CDE also doesn't require source; it just packages an executable you are able to run.
At one point I hit an error running the cde command, which was fixed by prepending the command with LD_ASSUME_KERNEL=2.4.1; this may not be necessary in recent versions, as that was years back.
Code is also on GitHub: https://github.com/pgbovine/CDE
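Typical usage looks something like this (a sketch from memory; check the project README for the exact wrapper names):
$ cde tcpdump -i eth0 -c 5          # run once under cde so it records every file touched
$ ls cde-package/                   # the resulting self-contained package
$ cde-package/tcpdump.cde -i eth0   # on the target machine, run the packaged wrapper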
It works, but it also depends on the versions of the shared libraries you use, including libc and libstdc++, which are tied to the compiler version and may differ from distro to distro.
The best way is to distribute the source code and make it easy to build on any reasonable Linux distribution. This is better than binary distribution, because it is not enough to make the binary compatible with the shared libraries: you also need to adapt your program to distribution-specific locations and conventions for where web apps go, how e-mail is sent, how services are started, how the default paper size is determined, and a myriad of other details.
See for example the Debian Policy Manual for a document describing many of the things a distribution needs to decide to ensure compatibility between applications running on it. You don't need to read it through or learn it by heart, but it shows the scope of the issues that may trip you up.
You should probably work together with several of the major distributions to ensure your application works well with all of them. Most distributions' developers will happily help if you approach them politely. If you're lucky, you can attract volunteers from the distros to make the binary packaging for you, and that will quickly give you feedback on what you need to change at the source level to make your application easy to package.
The Linux Standard Base already mentioned by others attempts to work out a cross-distribution solution to these variables, but it is not comprehensive and not fully supported by most distributions. However, most distributions consider it a problem if they accidentally break LSB compatibility.
Normally it's OK to use binaries across Linux distributions as long as you have the same set of libraries available. You can use ldd to check which libraries a binary needs; libc in particular should be the same version on the distributions involved.
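For example (mybinary is a placeholder):
$ ldd ./mybinary   # lists each required library and where it resolves on this system
$ ldd --version    # prints the local glibc version, handy for comparing distros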
You could statically link your executables for portability.
LSB is definitely worth checking out. Though with regard to working with libraries, I was most satisfied by this answer here at SO https://stackoverflow.com/questions/1209674/shipping-closed-source-application-for-linux/1242738#1242738 and this detailed treatment of the rpath mechanism: http://www.eyrie.org/~eagle/notes/rpath.html
How about HTML?
It's cross-platform, it's been around forever, and if you consult caniuse you know your target environment. It can render any UI I ever dreamt of, and if you're willing to learn JavaScript, you can approach this from both a server programming perspective and a client programming perspective without switching languages, which comes in handy if you're the one doing both.
It's probably the closest thing we have to a machine lingua franca these days, and that is a good thing, because it means there are potentially more pickings for everybody involved.
People don't realize that the web browser does most things you would want from a program, including rendering 3D graphics and PGP-style encryption.
The biggest benefit I see in the browser as a platform is that everyone's nephew knows how to install a browser on a new computer, and from there your app is just a URL away, including packaging in some of the stores.

What is a good barebones linux distro for beginner kernel development?

In my Operating Systems class we are looking to modify a Linux kernel with some simple system calls of our own in C.
What would be a good distro suited for this purpose? We don't need any frills, no GUI, a vanilla kernel, etc. The more basic the better.
I was able to modify the kernel pretty easily using a minimal Gentoo install.
Just install Gentoo, follow the installation instructions, and then:
$ emerge gentoo-sources
$ emerge emacs
$ cd /usr/src/linux
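From there, the usual modify-build-boot cycle is something like this (a sketch; adjust the install step for your bootloader):
$ make menuconfig              # configure the kernel, or start from the distro's .config
$ make && make modules_install
$ make install                 # drops the new kernel under /boot
Then point your bootloader at the new kernel and reboot.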
In my operating systems course last semester we used User Mode Linux; the big advantage is that when you hose the system, you can simply kill the process with no risk to the host environment.
Adding or modifying system calls is tedious but trivial regardless of the kernel you use. However, the 2.6 kernel is significantly more massive and complex, so if you're going to modify the code in a significant way, the older kernels are easier to work with and much better documented (i.e. it's easier to find books and references).
Happy hacking :)
archlinux++
but really.. gentoo, slack, and arch are all more-or-less good choices
Arch Linux provides a great platform for kernel development that is also very functional. If you learn to use pacman, it will actually make testing your kernel modifications quite easy, and it provides the sources and tools in a sane manner.
I do think that if you are serious about learning linux and kernel hacking, doing a Linux From Scratch install should be on your list. It's a great distro/book and will let you build the platform for development yourself.
On all distributions, you can install the vanilla kernel.org sources instead of the distribution-related kernel packages, which is probably a good idea anyway when you want to do kernel development.
However, you'll be in trouble when you want to use any recent distribution with non-2.6 kernels, because they often build libc6 in a way that it cannot run with 2.4. Additionally, a lot of the guts of hardware management (like udev) require fairly recent kernels.
Apart from that, using Debian gives you a barebone system, and installing your own kernels is a breeze with kernel-package.
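With kernel-package installed, building and installing a custom kernel .deb is roughly (a sketch; older versions name the package kernel-image-* rather than linux-image-*):
$ cd /usr/src/linux
$ make-kpkg --initrd --revision=custom.1.0 kernel_image
$ dpkg -i ../linux-image-*.deb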
I wouldn't necessarily say any particular distro is geared towards kernel development as such, but if you want a traditional Linux distro that doesn't pile too much custom configuration stuff between you and the kernel, Slackware is a decent choice.
My suggestion is to grab the latest kernel. There will be more debugging features in it than in an older kernel. Also, to a newbie, older kernels look pretty much as complex as the most recent ones.
As for the distribution itself, you can't really go wrong. If all you want is to try some custom system calls, then grab whatever mainstream distribution which gives you a nice development environment. Then compile and try your customized glibc without installing it over the distro's.
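One way to do that last step (a sketch; the paths are illustrative) is to invoke the freshly built dynamic loader directly and point it at the new libraries:
$ ~/glibc-install/lib/ld-linux.so.2 --library-path ~/glibc-install/lib ./testprog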
When choosing a distro for kernel development, remember that it's the kernel you want to hack, not the distro itself. You will therefore want an easy distro that stays out of your way as much as possible. Ubuntu stays out of the way fairly nicely.
IANAKH
A non-Linux alternative is GeekOS, but it is very much aimed at the educational level and is not a practical kernel. It is ultra-simple, though.
Well, I have found one called Minix. It isn't really a Linux distro, but it was made specifically for teaching. If you can only use a Linux distro, though, it shouldn't matter much which one; I am pretty sure all distros use the same kernel.
Gentoo, if you don't mind automated compilation (most people think that Gentoo is Linux From Scratch, i.e. that you have to do everything on your own).
Arch, if you have a slower computer (laptop).
The biggest advantage of these two is that they have very, very good documentation, and merely installing Gentoo, for example, gives you basic knowledge about the init system and which services have to run. If one just copy-pastes commands from the guide, it's worthless though (luckily the handbook makes people think a bit, thus preventing kids from installing Gentoo and taking over our neat #irc) :D
