Find the files related to a software package - Linux

I have one doubt. I am doing a project related to the system restore concept in Linux. There I am planning to perform application-wise rollback in case of failure. Is there any way to figure out all the files used by an application on the system?
Okay, I will make it a little clearer. For instance, consider the Firefox application. When it is installed, many files are written from the .deb file to folders like /etc, /usr, /opt etc. In Windows all the files are installed in one folder under Program Files, while in Linux they are not. So is there any way to figure out the files that belong to a piece of software?
Thanks.

Well this can cover several things.
If you mean, which files are provided by the installation of your application? Then the answer is: use decent package management, provide your software as an rpm/deb/... package, and the package manager will take care of the rest.
If you mean, which libraries are being referenced by your application? Then you can use ldd; this will tell you which dynamic libraries are used when executing the application.
If you mean, which files is my application actively using? Then take a look at the output of lsof (lsof = list open files) (or alternatively ls /proc/<pid>/fd/); this will show all file descriptors opened by your application (files, sockets, pipes, ttys, ...).
Or you could use all of the above; see the examples below.
One thing you can't track (unless you log this yourself) is which files have been created by your application during its lifetime.
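As a quick illustration, using firefox from the question (the path and the PID are examples; find the real PID with pgrep firefox):
$ ldd /usr/bin/firefox    # dynamic libraries the binary links against
$ lsof -p 1234            # all file descriptors currently open by PID 1234
$ ls /proc/1234/fd/       # the same information straight from procfs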

Determining all the files installed along with the app depends on the package manager. All the ones I've dealt with (apt, pacman) have had this capability.
To determine all the files currently open by an application, use lsof.
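For example, the respective commands would be something like this (the package name is an assumption; the exact name can differ per distro):
$ dpkg -L firefox     # apt/dpkg: list every file the package installed
$ pacman -Ql firefox  # pacman: the same on Arch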

Well, that depends ...
Most Linux systems have some kind of package management software, like aptitude in Debian and Ubuntu, which has information about what belongs to a package. You might be able to use that information. It does not cover files created during the runtime of apps, though.

If you are using an RPM-based distro,
# rpm -Uvh --repackage pkg-1-1.i386.rpm
will repackage the old files and upgrade in a transaction, so you can later roll back if something goes wrong. To roll back to yesterday's state, for example:
# rpm -Uvh --rollback yesterday
See this article for other examples.
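Note that --rollback can only work if the old files were actually repackaged at upgrade/erase time; if I remember correctly, this is switched on through an rpm macro along these lines (treat this as a sketch and check your distribution's rpm documentation, since the repackage/rollback feature was dropped from newer rpm releases):
# echo '%_repackage_all_erasures 1' >> /etc/rpm/macros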

Related

Changing the order in which directories are searched for programs in Linux

I recently was in a situation where the Software Center on my Ubuntu installation was not starting. When I tried to launch it from the console, I found that Python was unable to find Gtk, although I hadn't removed it.
from gi.repository import Gtk,Gobject
ImportError: cannot import name Gtk
I came across a closely related question on Stack Overflow (I am unable to provide a link to the question as of now). The accepted solution (which also worked for me) was to remove the duplicate installation of GTK from /usr/local, as Gobject was present in this directory but not Gtk.
So I removed it, launched software-center again, and it started.
While I am happy that the problem is solved, I would like to know if removing files from /usr/local can cause severe problems.
Also, echo $PATH on my console gives:
/home/rahul/.local/bin:/home/rahul/.local/bin:/home/rahul/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/rahul/.local/bin:/home/rahul/.local/bin
which tells me that /usr/local/bin is searched before /usr/bin. Should $PATH be modified such that the order of lookup is reversed? If yes, then how?
You actually don't want to change that.
/usr/local is a path that, according to the Linux Filesystem Hierarchy Standard, is dedicated to data specific to this host. Let's go back in time a bit: when computers were expensive, to use one you had to go to some lab where many identical UNIX(-like) workstations were found. Since disk space was also expensive, and the machines were all identical and had more or less the same purpose (think of a university), they had /usr mounted from a remote file server (most likely through the NFS protocol), which was the only machine with disks big enough to hold all the applications that could be used from the workstations. This also allowed for ease of administration: adding a new application or upgrading another to a newer version could be done just once on the file server, and all machines would "see" the change instantly. This is why this scheme persisted even as bigger disks became inexpensive.
Now imagine that, for some reason, a single workstation needed a different version of an application, or maybe a new application was bought with only a few licenses and could thus be run only on selected machines: how to handle this situation? This is why /usr/local was born, so that single machines could somehow override network-wide data with local data. For this to work, of course, /usr/local must point to a local partition, and things in said path must come before things in /usr in all search paths.
Nowadays, UNIX-like machines (Linux included) are very often stand-alone machines, so you might think this scheme no longer makes sense, but you would be wrong: modern Linux distributions have package management systems which, more or less, play the role of the above-mentioned central file server. What if you need a different version of an application than what is available in the Ubuntu repository? You could install it manually, but if you put it in /usr, it could be overwritten by an update performed by the package management system. So you can just put it in /usr/local, as this path is usually guaranteed not to be altered in any way by the package management system. Again, it should be clear that in this case you want anything in /usr/local to be found before anything in /usr.
Hope you get the idea :).
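As an aside, if you want to see which copy of a program the shell will pick under the current $PATH, type -a lists every match in lookup order (python here is just an illustrative name, and the output is an example):
$ type -a python
python is /usr/local/bin/python
python is /usr/bin/python
The first line is the one that actually gets run.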

How to verify whether a disabled or stopped Unix/Linux service has a valid binary path?

For a given disabled or stopped Unix/Linux service, can we verify whether it has a valid binary path?
I am not sure if it is necessary to check this. I am a newbie to the Unix/Linux world.
Or are all installed Unix/Linux services verified by default against the existence of their binary and path?
For example, take the installed Unix/Linux service sshd.
Check its status through the command "service sshd status".
Then, check its binary path using the command "which sshd".
Is there a better way to check this out?
What if I would like to check all Unix/Linux services that are either stopped or disabled?
Modern GNU/Linux distributions allow software installations to be managed using a software management system. Different such systems are in use, but they all share a common principle: a package carries within itself a description of all of its dependencies, covering installation, configuration and runtime. All of these dependencies are checked prior to an installation. So unless one forces any actions here, one has the guarantee that if a package is installed, then all its requirements are fulfilled.
Looking at your example of sshd (the OpenSSH daemon), this means:
If you installed this package, then you can rely on the fact that all required components are installed and match in their versions, and so on. In this case the daemon control scripts and the main executable are actually contained inside the same package; this is typical for all daemon packages. So if that package is installed, then all of its content is installed.
You could manually check whether all files contained in a package actually exist. But if you really feel this has to be done, for example if you suspect some files were somehow deleted by accident (which is actually pretty hard to do for such system files), then you can ask the software management system to verify the package's integrity. For example, on an openSUSE system you can say zypper verify to check that the dependencies of installed packages are still satisfied, and rpm -V openssh to check that all files mentioned in the package actually exist, are the ones originally installed (unaltered) and have correct permissions. If no error is thrown, you can rely pretty much on the fact that everything is fine.
This is obviously just an example; different distributions use different management systems, as mentioned above. But they all offer more or less the same features. And once you have understood the idea behind this approach, you will probably never want to miss that elegance and security any more...
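To tie this back to the sshd example: on an RPM-based system a minimal check might look like this (assuming the daemon lives at /usr/sbin/sshd and the owning package is called openssh; both names vary between distributions):
$ rpm -qf /usr/sbin/sshd    # which package owns this binary
$ rpm -V openssh            # verify that package's files (existence, checksums, permissions)
rpm -V prints nothing when everything is intact. On Debian-style systems, dpkg -S and the debsums tool play the corresponding roles.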

How to test for services in Linux?

I've been assigned a project to write some kind of script that will perform a sanity check on a Linux server implementation, to determine whether it has a number of dependencies installed before source code is deployed to it. I need to check for the presence of applications such as PHP, Nginx, PostgreSQL, etc., and likely confirm version numbers for these as well. These dependencies are required for the given source code to run properly on the server.
The problem is, I'm not sure how to approach this due to my inexperience with Linux. I've done some research and thought that the solution might be a combination of combing through the list of running services with a command such as "chkconfig --list" and pinging individual applications with commands such as "php -v", and then asserting that the results from these match what I'm looking for.
Pardon me if that makes no sense whatsoever; I really am new to this. I was thinking I could place these "tests" inside a shell script or something that could be run whenever a test on the server needed to be executed. I would aggregate the true/false results of my assertions and output whether the sanity check passed based on that. Any guidance would be greatly appreciated.
Thank you.
Revision: In lieu of a shell script, I was also thinking I could write this in Python. Does anybody know of any good Python libraries that allow querying of system services?
If your target systems are managed by reasonable people, the software will be managed by the packaging system. On Red Hat, Fedora, CentOS or SUSE systems that will be RPM. On any system derived from Debian it will be APT.
So your script can check for one of those two packaging systems. Be warned, though, that you can install RPM on a Debian system, so the mere presence of RPM doesn't tell you the system type. Packages can also be named differently; for example, SUSE will name things a bit differently from Red Hat.
So, use uname and/or /etc/issue to determine the system type. Then you can look for a particular package version with rpm -q apache or dpkg-query -s postgresql.
If the systems are managed by lunatics, the software will be hand-built and installed in /opt or /usr/local or /home/nginx and versions will be unknown. In that case good luck.
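In case it helps, here is a minimal sketch of such a sanity-check script (the package names nginx/postgresql/php are placeholders; real package names vary per distribution):
#!/bin/sh
# Check each required package via whichever package manager is present.
fail=0
for pkg in nginx postgresql php; do
    if command -v dpkg-query >/dev/null 2>&1; then
        dpkg-query -W -f='${Package} ${Version}\n' "$pkg" 2>/dev/null || { echo "MISSING: $pkg"; fail=1; }
    elif command -v rpm >/dev/null 2>&1; then
        rpm -q "$pkg" || { echo "MISSING: $pkg"; fail=1; }
    else
        echo "no known package manager found"; exit 2
    fi
done
if [ "$fail" -eq 0 ]; then echo "sanity check passed"; else echo "sanity check FAILED"; exit 1; fi
Version assertions (e.g. parsing the output of php -v) can then be layered on top of this.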

yum/zypper for non-root installation in an independent rpm database

My company is developing a Linux-based software product which is shipped to different customers.
The product itself consists of small software components which interact with each other.
What we usually ship as an update/new release to the customer are the current versions of the different software components, e.g. compA-2.0.1, compB-3.2.3 and compC-4.1.2.
Currently we employ a rather simple shell script for the installation/upgrading process. However, we'd like to move forward to state-of-the-art packaging, mainly to have an easy way of swapping different versions of components, keeping track of files and the packages they belong to, and also to provide the customers with an easier interface for the update/installation.
The software components are installed in different directories, depending on the customer's demands. So it could be /opt, /usr/local or something completely different.
Since the vast majority of our customers runs on rpm-based Linux distributions we decided for rpm-packages instead of dpkg.
In rpm terms our problem is a non-root installation. This is relatively straightforward using the following features:
own rpm database using the --dbpath option
installing in different locations using the Prefix mechanism
optional: disabling automatic library dependencies using AutoReqProv: no in the rpm spec file
Using those features/options allows us to create rpm packages which can be installed with the rpm command-line tool as a non-root user.
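For illustration, such a non-root installation might look like this (paths are examples; compA-2.0.1 is the relocatable package mentioned above):
$ rpm --initdb --dbpath "$HOME/rpmdb"
$ rpm --dbpath "$HOME/rpmdb" --prefix "$HOME/opt" -Uvh compA-2.0.1.x86_64.rpm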
However, what we would really like is to install those packages from an HTTP repository with either yum or zypper; the latter is the tool of choice on SUSE-based distributions.
The problem we see is that none of these tools provides the alternative rpm database option (--dbpath in rpm) and the prefix support required for a non-root installation.
Does anybody have a suggestion/idea how to deal with this issue? Is there maybe a third package tool that we're not aware of?
Or should we maybe go a totally different route? I had a play with GNU Stow and wrote some very simplistic yum-like logic around it, but then I would basically be starting my own package installation tool, which is what I was trying to avoid.

How painful can a Linux to OpenSolaris migration be?

We have a business application that basically runs on an OS-independent stack (Tomcat + Java + MySQL), but we have always run it on Red Hat or CentOS.
There is a customer insisting on running it on OpenSolaris, for his own reasons (an expensive everything-is-included support agreement with Sun).
How painful can such a migration be? We have a lot of configuration files and support scripts, such as:
apache
apache/tomcat connector
email interaction with postfix
customized service start/stop
a couple of cron jobs (backup, monitoring)
different users and permissions (java, mysql, email, backup...)
Our build process outputs a .tar.gz file with our business code plus some shell scripts that edit all the OS configuration files.
Any previous experience with this?
The biggest issues will be with the non-POSIX (non-standard) options you've used with the GNU tools provided on Linux that are not in the standard Solaris commands. You might decide that porting the relevant tools from the GNU set is simpler than modifying your system. If you've laced the code with absolute pathnames for commands (/usr/bin/ls) but you decide to use the GNU versions instead, you've got to find a way of fixing those. I'd be extremely cautious about replacing the OpenSolaris versions with the GNU versions; you don't know when you might break something that the system relies on. So you would put the GNU commands in a separate directory (probably not /usr/local, because that is for the machine owners to populate, not you as an application-monger) and arrange for that to be used in place of the system commands. (Note: on Solaris, /bin is a symlink to /usr/bin; I assume the same is true of OpenSolaris.) AFAIK, Postfix is not standard on OpenSolaris, so you'd have to ensure you get that installed, too.
All of this is doable - there's nothing insuperable. But a lot depends on your code base.
We run both, though we don't use OpenSolaris as a web server.
The good:
OpenSolaris comes with the GNU tools, so get your path right and that's OK.
Most things just build and run just fine.
The not so good:
Make sure that you've installed and are using bash; otherwise all those bashisms that you didn't realise you were using will bite you.
Make sure that you're not using hard-coded paths to /usr/bin or /bin. These tools are not the GNU ones and therefore have different options. Use /usr/gnu as mentioned above.
You don't have the huge number of packages that you can install straight off, as you do with yum or apt. Yes, you have a package manager; it's just not quite so well populated.
As a result you will probably be installing packages by hand. They should install; it's just a bit more work for your system admins.
Are you sure that OpenSolaris runs well on your hardware? It's worth a check. You might find that some of the hardware drivers aren't as well tested.
Otherwise we find OpenSolaris to be nice. It has a lot of good ideas.
Have you looked at Nexenta (http://www.nexenta.org/os)? It's the OpenSolaris kernel with an Ubuntu userland.
OpenSolaris already includes all the GNU utilities; just point your scripts at /usr/gnu/bin.
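For example, something like this at the top of your scripts or profile should do (a sketch, assuming a Bourne-compatible shell):
$ export PATH=/usr/gnu/bin:$PATH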
Installing Postfix shouldn't present any problems, and Apache/MySQL are present in a base OpenSolaris install (in truth, the Cool Web Stack stuff makes it about as easy to administer as WAMP/Instant Rails). Beyond that, SMF manifests may make your life easier (SMF is a replacement for rc scripts, sort of like OS X's launchd, though you can still use regular init scripts), since specifying dependencies and run order is somewhat nicer: it will recursively start/stop all dependent services, too.
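For a flavour of SMF (the service name used here is illustrative, not an exact FMRI):
$ svcs -a | grep apache                        # list service instances and their state
# svcadm enable -r svc:/network/http:apache22  # start the service and its dependencies
# svcadm disable svc:/network/http:apache22    # stop it again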
Tomcat certainly works, though everybody I know on OpenSolaris uses GlassFish. YMMV, but deploying a .war is pretty much the same everywhere.
It may not be a bad first step to deploy into an LX branded zone (think FreeBSD jails or Linux-VServer for a comparison), as LX branded zones can run Linux binaries and are explicitly CentOS/RHEL based.
Other than that, OpenSolaris has been able to act as a Xen dom0 since b77 or so, and putting CentOS/RHEL into a domU is dead simple, if that's an option.
You also get all the Solaris goodies along with it (DTrace, ZFS, network virtualization [via CrossBow], etc). Who knows? You may even like it! Java is Java, so that shouldn't pose any issues.
You'll probably have to rewrite a big part of your scripts (user creation, service launch), as these things are done differently in CentOS and OpenSolaris.
As previously written, ask your customer to install the GNU tools, so you'll have less of your scripts to rewrite.
OS configuration files may also not be in the same format; you'll need to check.
Your tar.gz file should be extractable without trouble, but again you will have fewer surprises if you use the GNU tools; some Unix OSes ship a tar with limitations.
Any previous experience with this?
(maybe a little off-topic)
We package and distribute our Java/Tomcat/PostgreSQL/Unix application with all binaries referenced in our scripts. This implies having one build system for each OS we support, and it implies we support not only our application but also the external binaries, but in the end we get no bad surprises at customers' sites.
We also ask them to do all root operations (user creation, directory creation, sendmail configuration, system tuning) before we install the application.
We have written shutdown/startup scripts for all supported OSes, and installing them is the only thing we do as root on the customer's machine.
Besides the fact that you're a troll, somebody above just said that (Open)Solaris has:
- ZFS
- DTrace
We can understand that you are afraid of losing your RHCE job, but you have just proved to me once again that my decision as an employer to ignore all certifications when interviewing people was a good one. It seems that a large percentage of such people (especially in the Microsoft world) are not so... open-minded, to put it nicely.
Regards,
Alex
