Changing the default directory structure - Linux

If I start a new distro (e.g. LFS):
How can I change the directory structure?
What should I expect after it's ready? (I probably can't install most packages without modification, right?)
But, before you downvote: I've been asked to make a new distro for a specific project that needs (actually, wants) a new directory structure with a few changes, for example removing the /var and /bin directories, but without breaking the system. The scope of this distro is very limited, so I think it shouldn't be a big deal, as only a few packages need to be installed.

These are a few pointers that come to mind; the list is definitely not complete:
Your PATH should be updated in startup scripts such as ~/.bashrc, /etc/profile.d, and so on to reflect the new directories.
Configuration files tend to use /var quite often (/var/log, /var/tmp). You'd need to modify all of these location references.
Basically, your kernel is going to start /sbin/init, which starts the initialization scripts under /etc/rc.d or equivalent. If you trace all the scripts and services invoked during startup, you should be able to capture all the places where path names need to change.
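As a rough, read-only first pass (just an illustration, not an exhaustive audit), something like this can help locate hardcoded references before anything is moved:
grep -rl '/var/' /etc 2>/dev/null                    # config files that mention /var
grep -rl '/bin/' /etc/init.d /etc/rc.d 2>/dev/null   # startup scripts that reference /bin
echo "$PATH"                                         # the search path you will need to adjust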

You can create a folder called system and then move the files into /system. After that, create symlinks so the system can still be used. Example:
sudo -i
cd /
mkdir system
mv /usr /system/usr
ln -s /system/usr /usr
I just tried it... and it broke my system (I think that was because I moved everything into /system, including /boot, which is used by GRUB).
The chance of breaking the system is huge if you don't have the background knowledge. For example, if you move /bin to /system/bin, you won't be able to create the symlink afterwards, because the ln command (which is a program) lives in /bin, so the plain ln invocation will fail once /bin is gone (one safer ordering is sketched below).
Also, check out gobolinux.org, which is built around exactly this idea. The entire system has been reorganized, and to maintain compatibility they created symlinks so that applications don't have to be patched when they are ported.
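A minimal sketch of that safer ordering for /bin, assuming a traditional layout where /bin is a real directory (on many modern distros /bin is already a symlink to /usr/bin, in which case this does not apply):
sudo -i
mkdir -p /system
cp -a /bin /system/bin                 # copy first, so a working ln binary always exists somewhere
mv /bin /bin.old                       # move the original out of the way
/system/bin/ln -s /system/bin /bin     # call ln by its new absolute path
rm -rf /bin.old                        # rm now resolves through the new /bin symlink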

Related

How to build custom directory structure in FreeBSD?

We need to create our own OS using the FreeBSD source code, without additional directories like /opt, /home, and a few others. Is there any way to remove the creation of these directories from the FreeBSD source code?
The most straightforward way is to write a post-install script that removes stuff you don't need.
You can also change the build system (the Makefiles) to install things into the desired locations; this is harder and will require additional work whenever you update the FreeBSD source to a newer version.
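A minimal sketch of the post-install approach, assuming the freshly built system is installed into a target root mounted at /mnt/target (the directory names come from the question; adjust as needed):
#!/bin/sh
# remove unwanted top-level directories from the installed tree
TARGET=/mnt/target
for d in opt home; do
    rmdir "$TARGET/$d" 2>/dev/null || echo "skipping $d (missing or not empty)"
done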

Add the software's bin directory, or just add a soft link for the executable file in bin, when installing software on Linux?

I'm not root on the Linux server, so I chose to install software under $HOME/local/bin. I already added the $HOME/local/bin directory to the PATH environment variable in my .bashrc.
Some software installs like this:
tar xvzf ncurses-5.9.tar.gz
cd ncurses-5.9
./configure --prefix=$HOME/local
make
make install
cd ..
So it installs directly into my $HOME/local/bin.
But for some software, such as sbt-1.2.1.zip (Java-based), decompressing the download just gives a folder sbt containing three folders: bin, conf, and lib. Its bin contains one executable file named sbt, along with java9-rt-export.jar, sbt-launch-lib.bash, sbt-launch.jar, and sbt.bat.
Here I wonder:
Should I just symlink this executable sbt file into my $HOME/local/bin and then source my .bashrc?
Or, after decompression, add one line to my .bashrc: export PATH="downloadpath/sbt/bin:$PATH"?
Since downloadpath/sbt/bin holds just one executable, I'm not sure it's right to add the whole bin folder to PATH. If a software's bin folder contains executable files (one or many), adding its bin to .bashrc seems more convenient, but even so, I'm not sure it's right.
I'm not familiar with installing software; at the moment I usually know how but not why. I've shown two ways to install here (there are more). Are executables always placed in bin or src? Some software has no bin, only src, with no executable files in it...
Slurm can also use modules to install software, and conda is another way, but I want to confirm that the two traditional ways I mentioned can still be used alongside Slurm or conda.
Any suggestion, even about just one aspect, would be appreciated!
For precompiled software, or, in general, software that does not offer configure scripts or (C)Make files, it is often better to leave it in its own directory and adapt the *PATH environment variables (PATH for binaries, but also LD_LIBRARY_PATH and LIBRARY_PATH for libraries, CPATH for include files, and MANPATH for the man pages).
The reason is that the software might expect to find files, such as libraries, at paths hardcoded relative to the position of the executable.
In your case, you might also need to set the CLASSPATH environment variable to the directory with the jar files.
To ease software installation, you can use tools such as EasyBuild, which can even create user modules just like the system modules installed by the system administrators.
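A minimal sketch of the environment-variable approach for the sbt example above (the unpack location $HOME/local/sbt is an assumption; adjust it to wherever you extracted the archive):
# e.g. in ~/.bashrc, or in a small env file you source on demand
export SBT_HOME="$HOME/local/sbt"                 # assumed unpack location of the sbt zip
export PATH="$SBT_HOME/bin:$PATH"
export CLASSPATH="$SBT_HOME/bin/*:$CLASSPATH"     # only if a tool needs the bundled jars on the classpath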
There is something wrong, in my opinion, with your setup. If you don't have a root account on your server, isn't it better to test what you need to test in a safer environment, for example a VM or container on your development machine?
However, in your situation it may be better to start sbt from a separate bash script rather than by modifying your .bashrc.
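For illustration, such a wrapper could look like this (the unpack path is hypothetical):
#!/bin/sh
# ~/local/bin/sbt -- thin wrapper around the unpacked distribution
exec "$HOME/downloads/sbt/bin/sbt" "$@"
Make it executable with chmod +x ~/local/bin/sbt; since $HOME/local/bin is already on your PATH, typing sbt will then work without touching .bashrc again.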

Multiple binaries with the same name in Ubuntu/Linux

I have recently installed the web framework Play (http://www.playframework.com/) and want to have the play executable in the system path, i.e. $PATH. But Ubuntu already defines a command called play. How do I override the system-defined command with my framework's binary path so that the command play on the command line calls my framework rather than the old application?
Installation: I downloaded the zipped file of the framework and unzipped it into one of my personal folders, which contains the docs and the executable.
Never alter the contents of installed packages. Such changes can provoke hard-to-find problems in the system, and in any case they will most likely be overwritten again by subsequent updates. There are other alternatives:
1. obviously, you can choose another name for your executable
2. place the executable in another part of your $PATH; if it's a "personal installation", ~/bin is typically used for this. Remember that the order of entries in the $PATH variable is important: first come, first served.
3. use the traditional /usr/local/bin location for locally added "wild" installations; this way there is some separation between clean packages and wildly installed files inside the system
4. store your software in some other location and prepend that to your personal or system-wide $PATH variable
5. store your executable under another name and create an alias for it (see man alias for an explanation), which allows you to call it by a name that "hides" the original command. For this, the executable can be addressed with an absolute path, so it does not have to be found inside the $PATH variable.
In my personal opinion, options 2 and 5 are the best when it comes to "personal installations".
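For example, options 2 and 5 could look like this (the unpack directory ~/frameworks/play is an assumption):
# option 2: a personal bin directory that comes first in $PATH
mkdir -p ~/bin
ln -s "$HOME/frameworks/play/play" ~/bin/play    # shadows /usr/bin/play as long as ~/bin is earlier in $PATH
# option 5: or hide the original command behind an alias, e.g. in ~/.bashrc
alias play="$HOME/frameworks/play/play"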
If you are sure you'll never use the original play command, you could just remove the binary. But in general, this isn't a good idea, since some system component you don't think of might need it, and the next update will probably restore it.
The best thing to do is to prepend the directory of your play command to the PATH, for example, using PATH=/opt/framework/bin:$PATH in your .profile (assuming your play command installs to /opt/framework/bin/play), or the script that starts your web server, or wherever you need your play command.
Remember that this does not make your play command global. A common mistake is to add the path in .profile and then call the program from crontab; crontab scripts will not execute .profile or .bashrc.
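For example (reusing the hypothetical /opt/framework path from above), the login shell and the crontab have to be handled separately:
# ~/.profile (read by login shells)
export PATH=/opt/framework/bin:$PATH
# crontab -e (cron does not read .profile, so set PATH there explicitly)
PATH=/opt/framework/bin:/usr/bin:/bin
0 3 * * * play ...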

Custom InstallAnywhere location for .com.zerog.registry.xml file on linux

I'm running into an issue where I do not have write access to the /var directory on a UNIX environment, and InstallAnywhere doesn't provide me the option of writing the .com.zerog.registry.xml to any other location for a product installation. Is there a parameter out there that allows for this file to be written to a different directory?
According to the IA docs:
If logged in as root, the global registry is located in /var.
If logged in as a user, it is located in the user’s home directory.
So, if you're running as root and can't write to /var, it sounds like a permissions problem with the /var directory, independent of IA. Check the permissions on /var.
If you're running as a non-root user, then the registry shouldn't be going to /var, but to $HOME/.com.zerog.registry.xml (FWIW, I just checked one of our test Linux boxes and found .com.zerog.registry.xml under both /var and under test-user $HOME directories. The docs appear to be correct).
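A quick way to check where the registry actually went, and whether permissions are the culprit (paths taken from the docs quoted above):
ls -ld /var                               # can root actually write here?
ls -l /var/.com.zerog.registry.xml        # global registry (root installs)
ls -l "$HOME/.com.zerog.registry.xml"     # per-user registry (non-root installs)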
I've also seen some very strange behavior if IA is low on space in $TMP. Make sure you have plenty of space there.
Also, have you considered running the installer with sudo, or the graphical equivalents kdesudo (KDE) and gksu (Gnome)? Those might get you where you want to go.

Can oprofile be made to use a directory other than /root/.oprofile?

We're trying to use oprofile to track down performance problems on a server cluster. However, the servers in question have a read-only file system, where /var/tmp is the only writeable directory.
OProfile wants to create two directories whenever it runs: /root/.oprofile and /var/lib/oprofile, but it can't, because the filesystem is read-only. I can use the --session-dir command line option to make it write its logs to elsewhere than /var/lib, but I can't find any such option to make it use some other directory than /root/.oprofile.
The filesystem is read-only because it is on nonwriteable media, not because of permissions -- ie, not even superuser can write to those directories. We can cook a new ROM image of the filesystem (which is how we installed oprofile, obviously), but there is no way for a runtime program to write to /root, whether it is superuser or not.
I tried creating a symlink in the ROM that points /root/.oprofile -> /var/tmp/oprofile, but apparently oprofile doesn't see this symlink as a directory, and fails when run:
redacted#redacted:~$ sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
mkdir: cannot create directory `/root/.oprofile': File exists
Couldn't mkdir -p /root/.oprofile
We must run our profilers on this particular system, because the performance issues we're trying to investigate don't manifest if we build and run the app on a development server. We can't just run our tests on a programmer's workstation and profile the app there, because the problem doesn't happen there.
Is there some way to configure oprofile so that it doesn't use /root ?
I guess it should be as simple as overriding the HOME environment variable:
HOME=/tmp/fakehome sudo -E opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
If that doesn't work out, you could have a look at
unionfs
aufs
to create a writable overlay. You might even just mount tmpfs on /root, or something simple like that.
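For instance, the tmpfs idea could be as simple as the following (run before starting oprofile, e.g. from a boot script; the size is an arbitrary assumption):
mount -t tmpfs -o size=16m tmpfs /root    # gives opcontrol a writable /root for its .oprofile directory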
It turns out that this directory is hardcoded into the opcontrol bash script:
# location for daemon setup information
SETUP_DIR="/root/.oprofile"
SETUP_FILE="$SETUP_DIR/daemonrc"
Editing those lines seemed to get it working, more or less.
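In other words, something along these lines (the exact location of the opcontrol script may vary; /var/tmp is chosen here because it is the only writable directory mentioned in the question):
# edited lines in the opcontrol script, e.g. /usr/sbin/opcontrol
SETUP_DIR="/var/tmp/oprofile-setup"
SETUP_FILE="$SETUP_DIR/daemonrc"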
