I am taking an undergraduate operating systems class next semester, and this is a recommended book. I am wondering if you would still recommend Advanced Programming in the UNIX Environment, 1st Edition, as opposed to the second edition. I know you cannot recommend a book for a class you have not taken (that is not what I am asking); rather, I am wondering whether anyone who has read or owns both versions feels the 1st edition is still relevant, or whether, given its age (written in 1992), I would be better off investing in the 2nd edition. I don't know a ton about UNIX, and after taking a look at the 1st edition it seems like a wealth of information. Let me know what you think.
From the book's web site:
The second edition of Advanced Programming in the UNIX® Environment has been updated to reflect contemporary operating systems and recent changes in standards. In addition, the example chapters were overhauled. The four platforms used to test the examples in the book include FreeBSD 5.2.1, Linux 2.4.22, Mac OS X 10.3, and Solaris 9. These platforms are a moving target, and most likely there are newer versions available now, so your mileage may vary.
Major changes include the addition of a chapter on sockets, two chapters on threads, and the removal of the chapter discussing modem communication, although this lost chapter is available here. Additionally, the printer communication chapter was rewritten to account for today's network-based printers.
To my mind, the most valuable of these changes is the testing with modern platforms. APUE 1/e barely mentioned Linux and of course didn't cover OS X at all since it hadn't been created yet. 2/e fixes this.
That's not to say that APUE 1/e is useless for Linux and OS X systems programming. I used it successfully with Linux for many years. I can't think of any time a topic it covered didn't implicitly cover at least one way to do it on Linux. The main difficulty is that where there is more than one way to do something, APUE usually gives them all, but with 1/e you had to just try them all to find out which one Linux supported. It's a worse problem with OS X, because its kernel is less ecumenical than Linux's.
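To make the "more than one way" point concrete with a hypothetical example of my own (not a passage from the book): putting a descriptor into non-blocking mode can be done either through fcntl() or through the older ioctl(FIONBIO) route, and with 1/e you would simply try both to see which your kernel honored. A minimal sketch, assuming a POSIX-ish system:

#include <fcntl.h>
#include <sys/ioctl.h>

/* Portable, POSIX-blessed way: read-modify-write the file status flags. */
static int set_nonblock_fcntl(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Older BSD-style way; still supported on Linux, historically less portable. */
static int set_nonblock_ioctl(int fd)
{
    int on = 1;
    return ioctl(fd, FIONBIO, &on);
}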
I don't miss the chapters on threads and sockets in my 1/e copy because I have other books for that. As a new systems programmer, you will find them valuable until you find a reason to get something more comprehensive in those areas. They're both topics worthy of full books. (Full shelves, really.)
Anyway, bottom line, I still have my 1/e copy despite buying 2/e for work. The 1/e copy just went home is all. It's still useful.
It's a good book, and the first edition is not very out of date. Much of the point of Unix is to limit how much the features and interfaces change over time. The older version of the book is still very valid, and the fact that there are only two editions in nineteen years speaks to the stability of the Unix libraries and utilities. Of course, your professor should be able to explain any differences you might encounter.
I am not a CS/IT student, but I have knowledge of C, Java, data structures, and algorithms. Nowadays I am focusing on operating systems and have picked up some of the concepts, but I want some practical knowledge. Merely writing algorithm code in Java/C is no fun on its own. I have gone through many articles that mention you can customize the source code of the Linux kernel.
I want to start customizing the kernel as I move ahead in learning OS concepts, and apply them as I go. This would achieve two goals: 1. I will gain a practical understanding of the operating system, and 2. I will have a project.
Problems I face:
1. Where do I get the source code? Which source should I download? Also the documentation, if possible.
https://www.kernel.org/
I went there, but there are so many versions; which one would be better?
2. How will I customize the code once I have it?
Please give me detailed suggestions about how I should start this journey (of changing the source code to customize Linux).
Moreover, I am using Windows 8.
I recommend first reading several books on OSes and on programming. You need a broad CS culture (if possible, get a CS degree).
I am not a CS/IT student,
You had better become one, or else spend years of work learning all the material a CS graduate has learned.
First, you need to be very familiar with Linux programming on the user side (application programs). So read at least Advanced Linux Programming and study the source code of several programs, including shells (and some kind of server). Read syscalls(2) carefully too. Explore the state of your kernel (e.g. through proc(5)...). Look into https://kernelnewbies.org/
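As a tiny illustration of what "exploring the state of your kernel" from user space can mean, here is a minimal sketch of mine (not taken from the references above) that prints the running kernel's version string out of /proc:

#include <stdio.h>

int main(void)
{
    char line[256];
    FILE *fp = fopen("/proc/version", "r");   /* one well-known /proc entry */

    if (fp == NULL) {
        perror("fopen /proc/version");
        return 1;
    }
    if (fgets(line, sizeof(line), fp) != NULL)
        fputs(line, stdout);                  /* e.g. "Linux version ..." */
    fclose(fp);
    return 0;
}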
I also recommend learning several programming languages. You should in particular read SICP, an excellent introduction to programming. Read also a book like Programming Language Pragmatics. Read something about continuations and continuation-passing style. Read the Dragon Book. Read Introduction to Algorithms. Read something about computer architecture and instruction set architecture.
Merely writing algorithm code in Java/C is no fun on its own.
But the kernel is also written in C (mostly) and is full of algorithmic code. What makes you think you'll have more fun with it?
I want to start customizing the kernel as I move ahead in learning OS concepts, and apply them as I go.
But why? Why don't you also consider studying and contributing to some user-level code?
I would recommend first reading a good book on OSes in general, notably Operating Systems: Three Easy Pieces. Look also at OSdev.
Finally, the general advice about kernel programming is: don't. A common mistake is to try adding code inside the kernel to solve some issue that can and should be solved in user land.
How will I customize the code once I have it?
You probably should not customize the kernel, but if you do, you'll use familiar tools (a good source-code editor like Emacs or Vim, a compiler and linker on the command line, a build-automation tool like make). Patching the kernel is similar to patching other free software, but testing your kernel is harder (because you'll often need to reboot).
You'll also find several books explaining the Linux kernel.
If you still want to customize the kernel, you should first try to code a kernel module.
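To give a sense of the scale of a first module, here is a minimal sketch of the classic hello-world module (the file name and messages are purely illustrative, and it assumes the headers for your running kernel are installed):

/* hello.c - minimal example module */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal hello-world module");

static int __init hello_init(void)
{
	pr_info("hello: module loaded\n");
	return 0;                       /* 0 means successful initialization */
}

static void __exit hello_exit(void)
{
	pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

It is built with a small kbuild Makefile (essentially obj-m := hello.o plus a make -C /lib/modules/$(uname -r)/build invocation), loaded with insmod, removed with rmmod, and its messages show up in dmesg. Even this trivial module already exercises the whole build, load, and test cycle.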
Moreover, I am using Windows 8.
This is a huge mistake. You first need to become an advanced Linux user. So wipe Windows from your computer and install some Linux distribution (I recommend Debian), and use only Linux, no more Windows. Become familiar with the command line.
I seriously recommend avoiding the kernel as your first project.
I strongly recommend looking at some existing user-land free software project first (there are thousands of them, notably on GitHub; e.g. choose some package in your distribution, study its source code, work on it, and propose a patch to the community). Learn to build a lot of things from source code.
A wise man once said you "must act your way into right thinking, as you cannot think your way into right acting". In your case, you'll need to act as an experienced programmer would act, which means before we write any code, we need to answer some questions.
What do we want to change?
Why do we want to change it?
What are the repercussions of this change (i.e., what other functions, out of all the tens of millions of lines of source code, call this function)?
After we've made the change, how are we going to compile it? In other words, there is a defined process for this. What is it?
After we compile our new kernel/module, how are we going to test it?
A good start, in addition to the answer that was just posted, would be to run LFS (Linux from Scratch). Get a successful install of that and use it as a starting point.
Now, since we're experienced programmers, we know that tinkering with a 10M+ line codebase is a recipe for trouble; we need a bit more direction than that. Here's a list of bugs that need to be fixed: https://bugzilla.kernel.org/buglist.cgi?chfield=%5BBug%20creation%5D&chfieldfrom=7d
I, for one, would be glad to see the one called "AUFS hangs on fanotify" go away, as I use AUFS with Docker on a daily basis.
If, down the line, you decide you'd rather hack on something besides the kernel, there are plenty of other options.
From your question it follows that you've already picked up some operating-system concepts. However, if you feel that it's still insufficient, it is fine to spend more time on learning. An operating system (mainly, a kernel) has certain tasks to perform, such as memory management (and memory protection), multiprogramming, hardware abstraction, and so on. None of these topics may be neglected; they are all equally important. So, if you have some time, you may refer to useful books such as "Modern Operating Systems" by Andrew Tanenbaum. Specialized books like that will shed much light on all the important aspects of a modern OS. Suffice it to say, the Linux kernel itself was started by Linus Torvalds under strong inspiration from MINIX, an educational project by A. Tanenbaum.
A project as cumbersome as an OS kernel (BSD, Linux, etc.) contains a lot of code. Many people collaborate to write or enhance various parts of the kernel, so there is a common and inevitable need for a version control system. If you intend to submit your code to the kernel in the future, you also need hands-on experience with version control. In particular, Linux relies on Git SCM (software configuration management, roughly a synonym for version control).
So, once you have some knowledge of Git, you can install it on your computer and download the Linux source code: git clone https://github.com/torvalds/linux.git
Determine your goals for modifying the Linux kernel. What do you want to achieve? Perhaps you have a network card that you suspect is missing some features in Linux? Take a look at other vendors' drivers and attempt to fix the driver of interest to include those features. Of course, this will require some knowledge of the hardware, and if the features are hardware-dependent, you are unlikely to succeed in producing the code without specialist knowledge. But, in general, if you are trying to make an enhancement, it assumes that you are an experienced Linux user yourself; otherwise, how will you know that some fixes or enhancements are required? So I can't help but agree with the proposal to set Windows 8 aside for a while and start using some Linux distribution (e.g. Debian).
If you succeed in determining your goals (e.g. if you find a paper describing some desired changes in the Linux kernel, or if you decide to enhance some device drivers or write your own), you will be able to try it hands-on. You might still need some helpful books, but in this case Linux-specific ones. Also, writing C code for the kernel itself involves one important detail: you will need to comply with the kernel's coding style, otherwise the Linux kernel maintainers will not accept your patches.
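To give a flavor of that style (it is documented in the kernel tree itself): tabs for indentation, the opening brace of a function on its own line, braces on the same line for control statements, and short lower_case names. The helper below is purely illustrative, not real kernel code:

static int count_set_bits(unsigned long word)
{
	int count = 0;

	/* Tabs (rendered 8 columns wide), shallow nesting, simple control flow. */
	while (word) {
		count += word & 1;
		word >>= 1;
	}
	return count;
}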
So, I have attempted to outline some tips based on your current question. Of course, kernel development has far broader prerequisites, but these are the obvious ones.
I need your help. I've come across an interesting book, Programming the Cell Processor: For Games, Graphics, and Computation, which contains mostly C and some assembly for the Cell. The technology is indeed interesting, but I have some doubts.
The book is from 2008 and some things have changed:
There is no Linux support in the current firmware version.
The last version on IBM's website is from 2008, for Red Hat Enterprise Linux 5.2 and Fedora 9. Does anyone have experience running this IBM SDK on Fedora 13, or at least on any version newer than Fedora 9, and is the Linux support sufficient for testing?
Would it be useful, for example, for creating a distributable PSN game? And does anyone know anything about the price of actually getting a product there? (I've heard that it is way more expensive than, for example, Xbox indie games.)
So do you think it is worth it or not, be it just for educational purposes or for something "more" serious?
Any thoughts are welcomed, thank you!
Cell was dropped by IBM for general-purpose computers. It will live on for the next five years in the PlayStation, and I'm pretty sure that the next-generation PlayStation, whenever it is ready, will use Cell again, because establishing something new in CPU land is so unaffordable today.
But as a technology it is indeed no longer interesting. Learning CUDA might be a better choice.
Given that you don't have access to a Cell machine, I'd advise that it's probably not worth it. I absolutely love the Cell architecture - I think it was a fantastic step in the right direction. Unfortunately, having done some Cell development in the past, I was really disappointed with the tool chain, the simulator and the seemingly hostile attitude taken towards developers recently.
So, given that you're not going to be able to use a real Cell machine to get the speed gains you would see from writing programs within that idiom, you'd probably be much better off looking into general distributed-programming techniques (using MPI or something similar). These skills will be readily transferable to the Cell or its derivatives, or to any similar architectures that might arise in the future.
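For instance, the traditional first MPI program looks like the sketch below; everything in it is standard MPI (compile with mpicc, launch with mpirun), and the same message-passing mindset carries over to Cell-like or other distributed-memory architectures:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                    /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* which process am I?   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* how many processes?   */
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}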
As far as I'm concerned, and as much as it pains me, I think the Cell is basically a developmental dead end. Unless you have access to a commercial development license, you'll be extremely frustrated in your ability to actually get anything out of the architecture.
I work on a project that's distributed for free in both source and binary form, since many of our users need to compile it specifically for their system. This necessitates a degree of consideration in maintaining backwards compatibility with older host systems, and primarily their compilers.
Some of the cruftiest of these, such as GCC 3.2 (2003!), ICC 9, MSVC (almost abandonware, not C++!) and Sun's compiler (in some old version that we still care about), lack support for language features that would make development much easier. There are definitely also cases where enabling users to stick with these compilers costs them a lot of performance, which runs counter to the goals of what we're providing.
So, at what point do we say enough is enough? I can see several arguments for ceasing to support a particular compiler:
Poor performance of generated code (relative to newer versions, asked about here)
Lack of support of language features
Poor availability on development systems (more of an issue for the proprietary compilers than for GCC, but there are sysadmin issues with getting old GCC, too)
Possibility of unfixed bugs (we've isolated ICEs in ICC and xlC, what else might be lurking?)
I'm sure I've missed some others, and I'm not sure how to weight them. So, what arguments have I missed? What other technical considerations come into play?
Note: This question was previously more broadly phrased, leading many respondents to point out that the decision-making is fundamentally a business process, not an engineering process. I'm aware of the 'business' considerations, but that's not what I'm looking for more of here. I want to hear experiences from people who've had to support older compilers, or made the choice to drop them, and how that's affected their development.
Your question is conceptually the same as web developers who want to know when they should stop supporting Internet Explorer 6. The answer is that you have to do research.
How many people use the older compilers?
How many use the newer ones?
How many will be willing to upgrade?
How many users will you lose? (This can be calculated from the answers to 1, 2, and 3).
How much time and work would it save you to drop support for the older compilers?
Basically your decision comes down to comparing the answers to 4 and 5. It seems like this is an open source project from your description, but if it's a business, you can compare it numerically (if money lost is less than money saved, drop support). If it's not a business, it's a bit more complicated, as you have to guess the human cost, which can be a bit tricky.
Well, the usual way to go about this is first to ask. I assume you have a mailing list or a web page or something to facilitate that. So ask: who will be affected, and how hard would it be to upgrade, if we drop support for any of these compilers? After doing so, you'll have an idea of whether it is worth the hassle to keep supporting them.
It might also be kind to flag the last working version for each compiler-version you decide to drop support for, so that anyone who really cares can keep on using that old version.
I don't think it has much to do with the efficacy of old compiler technology. It's a business decision, and it really boils down to whether you want to keep your customers or lose them. Customers don't deal in tech; they deal in business and business decisions.
Ideally, you want to define some kind of metric constructed from how many customers you have, the different compiler versions they use, and the cost of maintaining each particular compiler version.
Fundamentally, you really need to be careful about when and how you're going to tell your customer base that you're going to retire part of your product set, and about how you tell them. Don't just drop it in their lap. Plan it.
You need an internally approved, controlled policy, and then start rolling it out, perhaps telling customers at user-group meetings. Ensure you allow a decent length of time (two years is good: one year for customers to complete current implementations, plus some slack) before you start implementing it, and have a support framework in place to help customers migrate in time.
How you plan this will define how your customers react. A few years ago, I was working at a software house which sold a really complex, high-end product for controlling electricity networks. The product sold for £2m for the complete package, and each customer signed up for a 25-year support contract. At some point we decided to rationalise the hardware. We were offering it on AIX, Solaris, Tru64 and HP-UX, but for some reason we decided to rationalise on AIX, with which I think we had a deal. Anyway, one of the customers, which was a Solaris shop, got really upset about this, and for the next 4 years we never heard a word from them. No phone calls, no patches, no on-site audits. Nothing.
The reason we decided to change was that we had done a Six Sigma project, and it indicated we would save about £19m a year by rationalising the infrastructure to AIX and NT. But in the end, we ended up fxxking off one of our primary customers and virtually destroying our user-group community.
The decision was made hastily, and it backfired. So I think your best idea is to plan it.
I've been coding and managing Java & ASP.NET applications & servers for my entire career. Now I'm being directed towards involvement in mainframes, i.e. z/OS & JCL, and I'm finding it difficult to wrap my head around it (they still talk about punch cards!). What's the best way to go about learning all this after having been completely spoilt by modern luxuries?
There are no punch cards in modern mainframes, they're just having you on.
You will have a hard time since there are still many things done the "old" way.
Data sets are still allocated with properties such as fixed-block-80, variable-block-255 and so on. Plan your file contents.
No directories. There are levels of hierarchy (qualifiers), and each is limited to 8 characters.
The user interface is ISPF, a green-screen text-mode user interface from the seventh circle of hell for those who aren't used to it.
Most jobs will still be submitted as batch jobs and you will have to monitor their progress with SDSF (sort of task manager).
That's some of the bad news, here's the good news:
It has a USS subsystem (UNIX) so you can use those tools. It's remarkably well integrated with z/OS. It runs Java, it runs Websphere, it runs DB2 (proper DB2, not that little Linux/UNIX/Windows one), it runs MQ, etc, etc. Many shops will also run z/VM, a hypervisor, under which they will run many LPARs (logical partitions), including z/OS itself (multiple copies, sometimes) and zLinux (SLES/RHEL).
The mainframe is in no danger of disappearing anytime soon. There is still a large amount of work being done at the various IBM labs around the world and the 64-bit OS (z/OS, was MVS, was OS/390, ...) has come a long way. In fact, there's a bit of a career opportunity as all the oldies that know about it are at or above 55 years of age, so expect a huge suction up the corporate ladder if you position yourself correctly.
It's still used in the big corporations as it's the only thing that can be trusted with their transactions: the z in System z means zero downtime, and that's not just marketing hype. The power of the mainframe lies not in its CPU grunt (the individual processors aren't that powerful, but they come in books of 54 CPUs with hot backups, and you can run many books in a single System z box) but in the fact that all the CPU does is process instructions.
Everything else is offloaded to specialist processors, zIIPs for DB2, zAAPs for Java workloads, other devices for I/O (and I/O is where the mainframe kills every other system, using fibre optics and very large disk arrays). I wouldn't use it for protein folding or genome sequencing but it's ideal for where it's targeted, massively insane levels of transaction processing.
As I stated, z/OS has a UNIX subsystem and z/VM can run multiple copies of z/OS and other operating systems - I've seen a single z800 box running tens of thousands of instances of RHEL concurrently. This puts all the PC manufacturers 'green' claims to shame and communications between the instances is blindingly fast with HyperSockets (TCP/IP but using shared memory rather than across slow network cables (yes, even Gigabit Ethernet crawls compared to HyperSockets (and sorry for the nested parentheses :-))).
It runs Websphere Application Server and Java quite well in the Unix space while still allowing all the legacy (heritage?) stuff to run as well. In fact, mainframe shops need not buy PC-based servers at all, they just plonk down a few zLinux VMs and run everything on the one box.
And recently, there's been talk that IBM may be providing xSeries (i.e., PC) plug-in devices for their mainframes as well. While most mainframe people would consider that a wart on the side of their beautiful box, it does open up a lot of possibilities for third-party vendors. I'm not sure they'll ever be able to run 50,000 Windows instances, but that's the sort of thing they seem to be aiming for (one ring to rule them all?).
If you're interested, there's a System z emulator called Hercules which I've seen running at 23 MIPS on a Windows box and it runs the last legally-usable MVS 3.8j fast enough to get a feel. Just keep in mind that MVS 3.8j is to z/OS 1.10 as CP/M is to Windows XP.
To provide a shameless plug for a book one of my friends at work has written, check out What On Earth is a Mainframe? by David Stephens (ISBN-13 = 978-1409225355). I found this invaluable since I came from a PC/UNIX background, and it is quite a paradigm shift. I think this book would be ideal for your particular question. I think chunks of it are available on Google Books so you can try before you buy.
Regarding JCL, there is a school of thought that only one JCL file has ever been written and all the others were cut'n'paste jobs on that. Having seen the contents of them, I can understand this. Programs like IEBGENER and IEFBR14 make Unix look, if not verbose, at least understandable.
Your first misconception is believing the "L" in JCL. JCL isn't a programming language; it's really a static declaration of how a program should run and what files, etc., it should use.
In this way it is much like (though superior to) the XML config spaghetti that is used to control such "modern" software as Spring, Hibernate, and Ant.
If you think of it in these terms all will become clear.
Mainframe culture is driven by two seemingly incompatible obsessions.
Backward compatibility. You can still run executables written and compiled in 1970. Forty-year-old JCL and scripts still run and work!
Bleeding-edge performance. You can have 128 CPUs on four machines in two data centres working on a single DB2 query. It will run the latest J2EE (WebSphere) applications faster than any other machine.
If you ever get involved with CICS (the mainframe transaction server) on z/OS, I would recommend the book "Designing and Programming CICS Applications".
It is very useful.
If you are going to be involved with traditional legacy application development, read books by Steve Eckols; they are pretty good. You need to map the terms from open systems onto the mainframe, which will cut down your learning time. A couple of examples:
Files are called data sets on the mainframe
JCL is more like a shell script
Subprograms/routines are like common functions, and so on. Good luck!
The more hand-holding at the beginning, the better. I've done work on a mainframe as an intern and it wasn't easy, even though I had a fairly strong UNIX background. I recommend asking someone who works in the mainframe department to spend a day or two teaching you the basics. IBM training may be helpful as well, but I don't have any experience with it, so I can't guarantee it will be. I've put my story about learning how to use the mainframe below for some context.
It was decided that all the interns were going to learn how to use the mainframe as a summer project that would take 20% of their time. It was a complete disaster, since all the interns except me were working in non-mainframe areas and had no one they could yell to over the cube wall for help. The ISPF and JCL environment was too alien for them to get proficient with quickly. The only success they had was basic programming under USS, since it's basically UNIX and college had familiarized them with that.
I had better luck for two reasons. First, I worked in a group of about 20 mainframe programmers, so I was able to have someone sit down with me on a regular basis to help me figure out JCL, submitting jobs, and so on. Second, I used Rational Developer for System z back when it was named WebSphere Developer for System z. This gave me a mostly usable GUI that let me perform most tasks, such as submitting jobs, editing datasets, allocating datasets, and debugging programs. Although it wasn't polished, it was usable enough and meant I didn't have to learn ISPF. The fact that I had an Eclipse-based IDE to do basic mainframe tasks decreased the learning curve significantly and meant I only had to learn new technologies such as JCL, not an entirely new environment.
As a further note, I now use ISPF, since the software needed to allow Rational to run on the mainframe wasn't installed on one of the production systems I used, so ISPF was the only choice. I now find that ISPF is faster than Rational Developer and I'm more efficient with it, but only because I was able to learn the underlying technology, such as JCL, with Rational and the ISPF interface at a later date. If I had had to learn both at once, it would have been much harder and required more one-on-one instruction.
I'd like to gain better knowledge of operating system internals. Process management, memory management, and stuff like that.
I was thinking of learning by getting to know either linux or BSD kernel.
Which kernel is better for learning purposes?
What's the best place to start?
Can you recommend any good books?
In college, I had an operating systems class where we used a book by Tanenbaum. In the class, we implemented a device driver in the Minix operating system. It was a lot of fun, and we learned a lot.
One thing to note, though: if you pick Minix, it is designed for learning. It is a microkernel, while Linux and BSD are monolithic kernels, so what you learn may not be 100% translatable to working with Linux or BSD, but you can still gain a lot out of it without having to process quite as much information.
As a side note, if you've read Just for Fun, Linus actually was playing with Minix before he wrote Linux, but it just wasn't enough for his purposes.
As a Linux user, I'd say Linux has a great community for people learning about the kernel. http://kernelnewbies.org is a great place to start asking questions and learning about how the kernel works. I can't make a book recommendation, but once you've read the starting material on kernelnewbies, the source is very well documented.
Aside from the good books already mentioned (Operating Systems Design & Implementation is particularly good), get hold of a 1.x-release Linux kernel, load it into VMware or VirtualBox, and start playing around from there.
You will need to spend a lot of time browsing source code. For this, check out http://lxr.linux.no/ which is a browsable linked version of the source and makes life a lot easier. For the very first version of Linux (0.01) check out http://lxr.linux.no/linux-old+v0.01/. The fun begins at http://lxr.linux.no/linux-old+v0.01/boot/boot.s. As you progress from version to version, check out the ChangeLog and dig into those parts that have changed to save you re-reading the whole thing again.
Once you've gotten a hold of the concepts, look at 2.0, then 2.2, etc. Be prepared to sink A LOT of time into the process.
Linux Device Drivers
Linux Core Kernel Commentary
Operating Systems Design and Implementation
I had previously bought these books on recommendation for the same purpose but I never got to studying them myself so only take them as second-hand advice.
I recommend the BSD kernels! BSD kernels have far fewer hackers, so following their evolution is easier. Both the BSD and Linux kernels have great hackers, but some people argue that BSD's lower fame filters out the novice ones. Also, following design decisions is easier when the sources are not being updated 100 times a day.
Among the BSD choices, my favorite is NetBSD. It might not be the pain-free choice you want for your desktop, but because it has a strong focus on portability, the quality is quite good. I think this part says it all:
Some systems seem to have the philosophy of “If it works, it's right”. In that light NetBSD's philosophy could be described as “It doesn't work unless it's right”
If you have been working long enough, you will know that NetBSD is quite a joy for learning good coding, although professionally you will find more opportunities with Linux.
Whichever choice you make, start by joining their mailing lists and following the discussions. Study some patches and finally try to do your own bug fixing. Regarding books, search for Diomidis Spinellis's articles and his book. It is not exactly a kernel book, but it has NetBSD examples and helps a lot when tackling large software.
Noting the lack of BSDs here, I figured I'd chip in:
The Design and Implementation of the FreeBSD Operating System (dead-tree book)
Unix and BSD Courses (courses and videos)
FreeBSD Architecture Handbook (online book)
I haven't taken any of the courses myself, but I've heard Marshall Kirk McKusick speak on other occasions, and he is really good at what he does.
And of course the BSD man pages, which are an excellent resource as they are maintained to a far greater extent than your average Linux man-page. Take for instance the uvm(9) man-page, describing the virtual memory interface in OpenBSD.
Not quite related, but I'll also recommend the video History of the Berkeley Software Distributions, as it gives a nice introduction to the BSD parts of UNIX history and culture, as well as plenty of hilarious anecdotes from back in the day.
There's no substitute for diving into the code. Try to find a driver or subsystem that you're interested in and poke around with it. With tools like VMware Workstation it's super easy to make whatever changes you want, snapshot the VM, and run your modified kernel. If the kernel panics on boot, who cares? Just jump back to the snapshot and fix the problem.
For books, I strongly recommend Linux Kernel Development by Robert Love. It's a wonderfully written book -- lots of information, organized sanely, and humorous... not dry reading at all.
Take Mike Stone's advice and start with Minix. That's what Linus did! The textbook is really well written, and Tanenbaum does a great job of showing how the various features are implemented in a real system.
Nobody seems to have mentioned that code-wise BSD is much cleaner and more consistent. The documentation's way better too (as already mentioned). But since there's a whole lot of fiddling with whatever system you choose - I'd pick the one you use more often.
Linux and Minix are fun to learn. If you also want to learn what a modern microkernel operating system looks like, you can look at QNX. The complete documentation is available online and it is very accessible; for example, this online book.
When I was at uni I spent a semester studying operating systems, and as part of this had an assignment where we had to implement a RAM-based filesystem in Linux.
It was a fantastic way to get to understand the internals of the Linux kernel and to get a grasp on how everything fits together, and a heck of a lot of fun playing around with how it interacts with standard tools too.
I haven't tried it myself, but you can go to Linux From Scratch and start building your own Linux distribution. Sounds like something that'll take a junkload of time, but will result in an intimate knowledge of the guts of the Linux kernel and how each part works. Of course, you can supplement this learning by following any of the other tips here.