I am learning to make a simple system call from this website.
When I go to my /usr/src directory, I see two folders:
1) Linux
2) Linux-Source-2.6.39.4
Which one should I make changes to in order to add my system call?
Neither. Download a fresh copy of the kernel source code, extract it into your home directory, and do your development work there using your normal user account. You only need root to install the kernel after you compile it.
The root-owned files in /usr/src are probably associated with the stock kernel that came with your system, and shouldn't be used for development. Especially since you'd have to do your development as root, just to have write permission.
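A minimal sketch of that workflow, assuming you want to match the 2.6.39.4 version already on the system (the URL and version number are just illustrations; use whichever release you are actually targeting):

    # as your normal user, not root
    cd ~
    wget https://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.39.4.tar.bz2
    tar xjf linux-2.6.39.4.tar.bz2
    cd linux-2.6.39.4
    # ... add your system call, then configure and build ...
    make menuconfig
    make
    # only installing the result needs root
    sudo make modules_install install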
I inherited an old system and I am still getting a hold of how it works. It's a custom Linux build based on AT91. We build a romfs and package in a bunch of binaries that get installed upon flashing. I have been building as root because that is how it was inherited and done in the past. The binaries that get packed into the romfs are listed in a Makefile along with individual file permissions.
When the romfs gets flashed and all the binaries get installed, they end up with ownership set to root:root. I understand the romfs-inst.sh script does set some permissions through the many options it provides, and those are being set correctly.
I need to be able to install the binaries as someuser:somegroup, and I am sure there is no chown-style option in romfs-inst.sh. How else do I change the owner and group of the binaries?
Any help is appreciated. Thanks!
romfs doesn't have any way of storing the owner of a file. (It doesn't even preserve permissions other than the execute bit.)
If you need these features, you should probably take a look at replacing romfs with cramfs or squashfs. Along with supporting ownership and permissions, these filesystems can also compress files, letting you store more data in the same amount of memory.
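For example, mksquashfs (from squashfs-tools) can force ownership when the image is built. A rough sketch, where the staging directory and the numeric IDs standing in for someuser:somegroup are placeholders:

    # build the image from the staged root, forcing owner and group on every file
    mksquashfs staging/ rootfs.squashfs -force-uid 1000 -force-gid 1000

mksquashfs also has pseudo-file definitions for finer-grained tweaks; check its help output for exactly what your version supports.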
After running make on the sources I have a compiled executable and a data directory with images for it. What should I do in the "make install" phase to correctly install these files onto a Linux system? And how can the application then find the installed data (in the case where the binary and the data are placed in different directories)?
Are there any standards for this?
There are many ways to install packages on a Linux or Unix system, much like on any other operating system. The normal method of installing software is through your distribution's package manager. Package managers differ between distributions, but in general they take a package (a file containing the binaries, source code, or other files required for the piece of software to work) and place its contents into the corresponding places as defined by the Filesystem Hierarchy Standard (FHS).
When you do a make install, you are bypassing the package manager and placing the binaries into that hierarchy directly, making it nearly impossible for the package manager to handle or account for that program's existence. This is not a good thing for anyone, as it is hard to keep a system secure or stable with many unknown files scattered throughout it.
If you do want to install something manually, please take a look at the Filesystem Hierarchy Standard and place the files under an appropriate folder: either under /opt (with a symlink into a directory covered by your PATH variable) or under /usr/local/.
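If you do install something by hand, a minimal sketch of the equivalent steps under /usr/local looks like this (the myapp name and all paths are purely illustrative):

    # install the binary and its data where the FHS expects them
    PREFIX=/usr/local
    install -d "$PREFIX/bin" "$PREFIX/share/myapp/images"
    install -m 755 myapp "$PREFIX/bin/"
    install -m 644 images/*.png "$PREFIX/share/myapp/images/"

The application then looks for its data under a data directory fixed at build time (here $PREFIX/share/myapp), rather than relative to wherever the binary happens to be.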
I have a standalone server running Cygwin -- I did not set up this server; it was inherited. Anyway, I'd like to know what options the installing admin selected in the setup program.
I've read that I could look in /etc/setup, /etc/postinstall, or /etc/preremove but there are a lot of packages in those directories... same goes for the output of cygcheck -c.
I don't want to know every single library on the system... just how to duplicate the install. Is there a way to determine which packages were selected in the GUI setup program?
Thanks!
Cygwin is pretty standalone. You should be able to archive up the entire Cygwin directory (and subdirectories) and move it to the same location on another system.
For the archiving I recommend 7-Zip, which you can get for free. The built-in Windows archiver can create permission problems when an archive is extracted on the destination system. If you use the built-in Windows archiver, move the archive to the new system, and extract it, it will extract without errors, but you may find that some Cygwin applications don't actually work right afterwards. I recommend 7-Zip for both archiving and unarchiving.
If you don't copy everything, you risk missing the original admin's custom changes.
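For what it's worth, a sketch of the two 7-Zip commands involved, assuming Cygwin lives in C:\cygwin and 7z is on your PATH (run from a Windows command prompt; adjust the paths to your setup):

    7z a cygwin-backup.7z C:\cygwin
    7z x cygwin-backup.7z -oC:\

The first command packs the whole tree on the old server; the second extracts it to the same drive and path on the new one.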
Is it possible to install and run applications using the regular filesystem, but have any created files and changes written to a specific directory?
I want to make an application believe it is installed to the system root and remove it by just deleting one folder from my home directory. A lightweight solution would be great!
It should be possible by combining unionfs and mount namespaces. Create a mount namespace (using unshare(1)), mount a unionfs over everything, and run the application there (I haven't done it myself, so no tested example commands, sorry).
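To give a rough idea of the shape of it (untested, per the caveat above; assumes unionfs-fuse is installed, you run this as root, and every path here is a placeholder):

    # private mount namespace, so the union mount is invisible to the rest of the system
    unshare -m sh -c '
        mkdir -p /tmp/changes /tmp/union
        # writes land in /tmp/changes; the real root stays read-only underneath
        unionfs-fuse -o cow /tmp/changes=RW:/=RO /tmp/union &&
        # run the application inside the union view of the filesystem
        chroot /tmp/union /bin/sh
    '

Deleting /tmp/changes afterwards throws away everything the application wrote.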
Take a look at mbox http://pdos.csail.mit.edu/mbox/
It intercepts system calls and redirects filesystem writes to a temporary directory which you can specify.
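The basic invocation is along these lines (this is only a sketch; check mbox's own documentation for the option that points the sandbox at a directory of your choosing):

    # everything the command writes is diverted into mbox's sandbox directory
    mbox -- make install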
I have written an open source (GPL) application for Linux and OSX and now wish to distribute it. Is it normal to distribute the source code along with the binaries by default, or just provide a link to where it can be obtained?
If I include the source files, where is the normal location for writing them on the user's system for Linux and OSX? (I thought /usr/local/src, but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty.)
It is usual to distribute the sources and binaries separately. Binaries would normally be distributed in distro-specific package formats whilst sources would be a simple .tar.gz containing a project folder. The user could unpack it to /usr/local/src if they wanted but it should build anywhere. It's not up to your program to drop its sources in any particular location.
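Making that source tarball is just a matter of packing the project folder, e.g. (with myapp-1.0 standing in for your project directory):

    # pack the project so it unpacks into a single, versioned directory
    tar czf myapp-1.0.tar.gz myapp-1.0/

Users can then unpack and build it wherever they like, whether that's /usr/local/src or their home directory.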
"I thought /usr/local/src but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty"
It will be empty if you are only using the Ubuntu repos. The OS is in charge of /usr and will drop any sources you install into /usr/src. But /usr/local is left for you to play with; that's where you install stuff that the distro doesn't provide.
About /usr/local/src
/usr/local and any subdirectories are always going to be empty on your machine unless YOU have specifically put something in there. It's a section of the filesystem that is reserved for user-installed software for that specific machine. Ubuntu (or any distribution) is not ever supposed to touch it.
Your distro will have separate places for its own source code, if any. Most Ubuntu installations won't need source code anyway (though you can download it if you want to), but if they do it'll go somewhere like /usr/src. But if you want to place your own source code somewhere and don't want your distro to mess with it, then just:
If it's just for developing/compiling in your own user account, you can just put it somewhere in your home directory.
If it's a piece of software you'll be installing on the system, /usr/local/src is the suggested spot and your distro won't mess with it there.
The FHS is the standard which says where in the filesystem things go, and it includes distinctions such as the ones I've discussed above.
Your software should be able to be compiled no matter which directory it's in because, as you can see, where the sources end up can vary.
It's worth looking at a few projects on SourceForge (http://www.sf.net). As mentioned by bobince, it's normal to distribute binaries and source separately. It's certainly kind to users not to require compilation, so they can just download and run.