How to replace a file from an existing RPM package?

I have several applications that I wish to deploy using rpm. Some of the files in my application deployments override files from other deployed packages. Simply including the new files in the deployment package will cause rpm conflicts.
I am looking for the proper way to use rpm to update/replace already installed files.
I have already come up with a few solutions but nothing seems quite right.
1. Maintain custom versions of the RPMs containing the original files.
This seems like a large amount of work for a relatively small reward, even though it feels less like a hack than some of the other possible solutions.
2. Include the files in the RPM under another name and copy them over in the %post section.
This would work, but it means littering the system with multiple copies of the files. It also means additional maintenance in the RPM build spec for each file.
3. Use wget in the %post section to replace the original files from some known server.
This is similar to the copy technique, but the files wouldn't even live in the RPM. It might act as a nice central configuration authority, though.
4. Deploy the files as new files, then use symlinks to override the originals.
This is also similar to the copy technique but with less clutter. The problem here is that some files don't behave well as symlinks.

To the best of my knowledge, RPM is not designed to permit updating / replacing existing files, so anything that you do is going to be a hack.
Of the options you list, I'd choose #1 as the least bad hack if the target systems are ones that I admin (as you say, it's more work, but it's the cleanest solution), and a combination of #2 and #4 (symlinks where possible, copies where not) if I'm creating the RPMs for others' systems (to avoid having to distribute a bunch of RPMs), though I'd make it very clear in the docs what I'm doing.
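A %post sketch of that #2/#4 combination (all paths here are hypothetical):

%post
# Option #4: keep a backup of the original, then point it at our replacement
if [ ! -e /etc/someapp/app.conf.orig ]; then
    mv /etc/someapp/app.conf /etc/someapp/app.conf.orig
fi
ln -sf /usr/share/myapp/app.conf.override /etc/someapp/app.conf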
You haven't described which files need to be updated or replaced and how they need to be updated. Depending on the answers to those questions, you may have a couple of other options:
Many programs are designed to use a single default configuration file and also to grab configuration files from a .d subdirectory. For example, Apache uses /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*.conf, so your RPMs could drop files under /etc/httpd/conf.d instead of modifying /etc/httpd/conf/httpd.conf. And if the files that you need to modify are config files that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability; this wouldn't help you immediately but would make future releases easier.
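To illustrate, a drop-in package's spec could claim just the conf.d file; a minimal sketch with hypothetical names:

%files
%config(noreplace) /etc/httpd/conf.d/myapp.conf

%post
# Pick up the new drop-in config; ignore failure if Apache isn't running
service httpd reload >/dev/null 2>&1 || :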
For command-line utilities like sendmail and lpr that can be provided by multiple packages, the alternatives system (see man alternatives) permits more than one RPM that provides these utilities to be installed side by side. Again, if the files that you need to modify are command-line utilities that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability.
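For instance, this is roughly how an RPM could register its own sendmail implementation in its scriptlets (the priority and paths are illustrative):

# In %post: register our implementation at priority 25
alternatives --install /usr/sbin/sendmail mta /usr/sbin/sendmail.myapp 25

# In %preun: unregister it on package removal
alternatives --remove mta /usr/sbin/sendmail.myapp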
Config file changes on systems that you administer are better managed through a tool like Cfengine or Puppet rather than through custom RPMs. I think that Red Hat favors Puppet.
If I were creating the RPMs for systems I don't administer, I'd consider using a third-party tool like Bitrock and dumping all of my stuff under /opt just so I wouldn't have to stomp on files installed by other admins' RPMs.
Edit (2019): Nowadays, Software Collections offers a useful alternative. You can create packages that install somewhere under /opt, and the Software Collections tools offer a standardized way for users to opt in to using those instead of whatever's normally installed under /usr. Red Hat uses this to distribute newer versions of tools for their otherwise stable and long-lived (i.e., older) Red Hat Enterprise Linux distributions.
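Users then opt in per command with the scl tool; for example (collection name illustrative):

scl enable rh-python36 'python --version'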

You can also execute rpm -U --replacefiles --replacepkgs ..., which will give you what you want.

See here for more info on RPM %files directives:
http://www.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html
You can use the argument passed to the %pre and %post scriptlets (and their %preun/%postun counterparts) to determine whether you are installing, upgrading, or removing the package.
If $1 is 0, we're removing old stuff: zero instances of the package will remain installed (you'll see this in %preun/%postun).
If $1 is 1, we're installing new stuff: a total of one instance will be installed.
If $1 is 2 or more, we're upgrading this package, and $1 represents the number of instances counted during the transaction.
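A sketch of how this might look in a spec file (the package logic is hypothetical):

%post
if [ "$1" -eq 1 ]; then
    # Fresh install: put our override in place
    ln -sf /usr/share/myapp/app.conf.override /etc/someapp/app.conf
elif [ "$1" -ge 2 ]; then
    # Upgrade: the old version's files are still on disk at this point
    : # nothing extra to do
fi

%preun
if [ "$1" -eq 0 ]; then
    # Complete removal (not an upgrade): restore the original file
    mv -f /etc/someapp/app.conf.orig /etc/someapp/app.conf
fi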
These sections help with managing files among the versions.
Keep track of what you're doing between versions and consider what one might do if they were to skip a version or two.
Have consideration for these things and you should be good to go!

Related

How to check if our system has been exploited by the log4j vulnerability?

Recently there was a log4j vulnerability reported:
https://nvd.nist.gov/vuln/detail/CVE-2021-44228
https://www.randori.com/blog/cve-2021-44228/
https://www.lunasec.io/docs/blog/log4j-zero-day/
How do I know whether my system has been attacked or exploited through injected arbitrary code?
Thank you so much
UPDATE: 2021-12-18...
Remember to always check for the latest information from the resources listed below
CVE-2021-45105... 2.16.0 and 2.12.2 are no longer valid remediations! The current fixing versions are 2.17.0 (Java 8) and 2.12.3 (Java 7). All other Java versions have to take the stop-gap approach (removing/deleting the JndiLookup.class file from the log4j-core JAR).
I have updated my message below accordingly.
Answering the question directly:
Reddit thread: log4j_0day_being_exploited has SEVERAL resources that can help you.
To detect the vulnerability:
Ctrl+F for "Vendor Advisories". Check those lists to see if you are running any of that software. If you are and an update is available for it, update.
THEN Ctrl+F for ".class and .jar recursive hunter". Run the program there; if it finds anything, remediate.
You can also Ctrl+F for "Vulnerability Detection" if you want to perform a manual active test of your systems for the vulnerability.
To detect exploitation... this is more complex, and all I can do is tell you to
Ctrl+F for "Vendor Advisories"... search through the material there... not sure which option will be best for you.
More resources
https://www.reddit.com/r/blueteamsec/comments/rd38z9/log4j_0day_being_exploited/
This one has TONS of useful info including detectors, even more resource links, very easy to understand remediation steps, and more
https://www.cisa.gov/uscert/apache-log4j-vulnerability-guidance
https://github.com/cisagov/log4j-affected-db
https://logging.apache.org/log4j/2.x/security.html
Remediation:
CVE-2021-45046 ... CVE-2021-44228 ... CVE-2021-45105
While most people that need to know probably already know enough to do what they need to do, I thought I would still put this just in case...
Follow the guidance in those resources... it may change, but
As of 2021-12-18
It's basically
Remove log4j-core JAR files if possible
From both running machines for immediate fix AND
in your source code / source code management files to prevent future builds / releases / deployments from overwriting the change
If that is not possible (due to a dependency), upgrade them
If you are running Java 8, then you can upgrade to Log4j 2.17.0+
If you are running Java 7, then you can upgrade to Log4j 2.12.3
If you are running an older version of Java, then you need to upgrade to the newest version of Java and then use the newest version of Log4j
Again, these changes have to happen both on running machines and in code
If neither of those is possible for some reason... then there is the NON-remediation stop-gap of removing the JndiLookup.class file from the log4j-core JARs.
There is a one-liner for this stop-gap option on Linux, using the zip command that comes packaged with most Linux distros by default.
zip -q -d "$LOG4J_JAR_PATH" org/apache/logging/log4j/core/lookup/JndiLookup.class
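If you have many JAR files, a loop along these lines can patch them all (the search root and filename pattern are assumptions; test on a copy first):

find / -type f -name 'log4j-core-*.jar' 2>/dev/null | while IFS= read -r jar; do
    # Strip the vulnerable class from each JAR in place
    zip -q -d "$jar" org/apache/logging/log4j/core/lookup/JndiLookup.class
done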
At the time of writing, most of the guides online for the stop-gap option on Windows say to do the following (again... assuming you can't do one of the remove-JAR or upgrade options above):
Install something like 7-zip
Locate all of your log4j-core JAR files and for each one do the following...
Rename the JAR to change the extension to .zip
Use 7-zip to unzip the JAR (which now has a .zip extension)
Locate and remove the JndiLookup.class file from the unzipped folder
The path is \path\to\unzippedFolder\org\apache\logging\log4j\core\lookup\JndiLookup.class
Delete the old JAR file (which now has an extension of .zip)
Use 7-zip to RE-zip the folder
Rename the new .zip folder to change the extension to .jar
There are also some options that use PowerShell:
Reddit thread: log4j_0day_being_exploited
Ctrl+F for "PowerShell"
This is fine if you only have one or two JAR files to deal with and you don't mind installing 7-zip, or you have PowerShell available to do it. However, if you have lots of JAR files, or if you don't want to install 7-zip and don't have access to PowerShell, I created an open-source VBS script that will do it for you without needing to install any additional software: https://github.com/CrazyKidJack/Windowslog4jClassRemover
Read the README and the Release Notes https://github.com/CrazyKidJack/Windowslog4jClassRemover/releases/latest
You can check your request logs for entries that look like this:
${jndi:ldap://example.com:1234/callback}
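As a rough first pass on a web server, you might grep the access logs for JNDI lookup strings (the log path is an assumption, and attackers use obfuscated variants that this simple pattern will miss):

grep -riE '\$\{jndi:(ldap|ldaps|rmi|dns)' /var/log/httpd/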
If you want to check whether you can be attacked, you can run a POC from GitHub. This link seems to be the first POC released; you can now find others.
You can also find a black-box test here.

How to correctly install binaries and data with "make install" after compiling on Linux?

After running make on the sources, I have a compiled executable and a data directory with images for it. What should I do in the "make install" phase to install these files correctly on a Linux system? And how can the application then find the installed data (in the case where the binary and data are placed in different directories)?
Are there any standards for this?
There are many ways to install packages on a Linux or Unix system, much like on any other operating system. The normal method is through your distribution's package manager. Package managers differ between distributions, but in general they take a package (a file containing binaries, source code, or other files required for the software to work) and place its contents into the corresponding locations defined by the Filesystem Hierarchy Standard (FHS).
When you do a "make install", you are bypassing the package manager and placing the binaries into that hierarchy directly, making it nearly impossible for the package manager to handle or account for that program's existence. This is not a good thing for anyone, as it is hard to keep a system secure or stable with many unknown files placed throughout it.
If you do want to install something manually, please take a look at the Filesystem Hierarchy Standard and place the files under the appropriate directory: either under /opt (creating a symlink in an area covered by your PATH variable) or under /usr/local.
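As for the "make install" phase itself, here is a minimal sketch of an install target following the usual PREFIX/DESTDIR conventions (the program and directory names are hypothetical; recipe lines must be indented with a literal tab):

# DESTDIR lets package builders stage the install into a scratch root
PREFIX ?= /usr/local
DATADIR = $(PREFIX)/share/myapp

install: myapp
	install -d $(DESTDIR)$(PREFIX)/bin $(DESTDIR)$(DATADIR)
	install -m 755 myapp $(DESTDIR)$(PREFIX)/bin/myapp
	cp -r images $(DESTDIR)$(DATADIR)/

The binary can then find its data because the path was baked in at compile time, e.g. by passing -DDATADIR='"$(DATADIR)"' in CFLAGS and opening files relative to that directory at runtime.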

CentOS 5.8 ISO with custom packages

How can I create a CentOS 5.8 .iso image with custom packages? I have to create an iso with only the packages needed for our production system. I already have all the rpms with their dependencies resolved in a folder. I have successfully created a repository from that folder with createrepo.
As I understand it, I should put the rpm files in the CentOS folder, and the repodata folder should contain the metadata needed for a package manager. I don't know whether I should modify the existing comps.xml file or create a new one, or which structure to use, since this is only a subset of the packages contained on a default CentOS installation disk.
I know it is probably futile to delete packages from the default iso, but that is my work order and there's not much to be done there. (There are also some packages not available in the default iso)
Much appreciated
I feel as though your question is a bit vague for a topic with this sort of breadth, but I'll do my best to offer an answer. I think you should use Kickstart for this task, as it's going to result in a much happier customer, whether they're internal or external, and easier management for you going forward when things get updated. Start by reviewing the CentOS documentation; if you are already this far and just asking about removing packages, check out the section of the docs that talks about specifying your packages and removing the ones you don't want.
If you only have the one style of production machine, then this should take care of it. If you have multiple different configurations, I'd suggest taking a look at a configuration management tool such as Ansible, Puppet, or Salt. This would allow you to provide a base image via Kickstart, then build off of that image depending on the needs of whoever is consuming the system.
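The package selection in a Kickstart file is just a %packages block; a sketch with illustrative group and package names (newer Kickstart versions close the block with %end):

%packages
@base
httpd
my-custom-package
# a leading '-' excludes a package from the install
-NetworkManager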

Distributing source files with an open source app

I have written an open source (GPL) application for Linux and OSX and now wish to distribute it. Is it normal to distribute the source code along with the binaries by default, or just provide a link to where it can be obtained?
If I include the source files, where is the normal location for writing them on the users system for Linux and OSX (I thought /usr/local/src but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty).
It is usual to distribute the sources and binaries separately. Binaries would normally be distributed in distro-specific package formats whilst sources would be a simple .tar.gz containing a project folder. The user could unpack it to /usr/local/src if they wanted but it should build anywhere. It's not up to your program to drop its sources in any particular location.
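Packing a source release is then typically a one-liner (the name and version are illustrative):

tar -czf myapp-1.0.tar.gz myapp-1.0/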
I thought /usr/local/src but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty
It will be empty if you are only using the Ubuntu repos. The OS is in charge of /usr and will drop any sources you install into /usr/src. But /usr/local is left for you to play with; that's where you install stuff that the distro doesn't provide.
About /usr/local/src
/usr/local and any subdirectories are always going to be empty on your machine unless YOU have specifically put something in there. It's a section of the filesystem that is reserved for user-installed software for that specific machine. Ubuntu (or any distribution) is not ever supposed to touch it.
Your distro will have separate places for its own source code, if any. Most Ubuntu installations won't need source code anyway (though you can download it if you want to), but if they do it'll go somewhere like /usr/src. But if you want to place your own source code somewhere and don't want your distro to mess with it, then just:
If it's just for developing/compiling in your own user account, you can just put it somewhere in your home directory.
If it's a piece of software you'll be installing on the system, /usr/local/src is the suggested spot, and your distro won't mess with it there.
FHS is the standard which says where in the filesystems things go, and includes distinctions such as the ones I've discussed above.
Your software should be able to be compiled no matter which directory it's in because, as you can see, the right location can vary.
It's worth looking at a few projects on SourceForge (http://www.sf.net). As mentioned by bobince, it's normal to distribute binaries and source separately. It's certainly kind to users not to require compilation, so they can just download and run.

Double-click installer in Ubuntu?

I'm trying to update our installer so a user can simply double-click on a file and have all the dependencies and our software installed easily. This is a suite of applications that will be deployed on a clean Ubuntu 8.04 (Hardy Heron) installation. I have investigated making a .deb file, but listing the dependencies doesn't work, because there isn't any Internet access available. And any script that would set up a local APT repository would still need to be run from the command line. Is there a way to put a .deb file inside of a .deb file?
I know many companies ship shell scripts that you have to chmod +x, and then execute. This is not acceptable. It is ridiculous that this isn't possible; especially considering the distribution and architecture is fixed.
If you are totally confident that it will be installed on the same system every time, you can find the list of package dependencies yourself, fetch them from the Ubuntu repositories, and package them up with your software. You just have to be clear that your software is for a specific version, and you'll probably need to deal with things like keeping up with maintenance releases.
You can also easily install with a script. As for your complaint about making scripts executable, well, I don't know how you're shipping your product, but since you say it's going somewhere without Internet access, I assume it's going to be copied from some kind of media. If you make the script executable when you put it on that media, you're done.
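One way to collect those dependency .debs is apt's download-only mode (package name illustrative; this only fetches packages not already installed on the build machine):

apt-get install --download-only some-dependency
# The fetched .debs land in apt's cache; copy them onto your installer media
cp /var/cache/apt/archives/*.deb /path/to/installer-media/debs/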
If you'd like to do this using packages, you can create a CD-ROM which contains a package repository. You can find all kinds of information on this with a Google search. For starters, try APTonCD - it's a GUI for doing it: http://aptoncd.sourceforge.net/
A makeself self-extracting executable that starts the install script using sudo will work.
The user can either run it from a terminal (after chmod-ing it) or can double-click it and tell it to "Run" from the prompt.
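makeself packs a directory, a label, and a startup script into a single self-extracting .run file; a sketch with hypothetical names (setup.sh lives inside installer-files/ and does the actual install work):

makeself.sh ./installer-files myapp-installer.run "MyApp Installer" ./setup.sh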
It's possible to put deb-files into deb-files. The only thing you need to do is to configure the appropriate scripts.
A .deb-file consists of:
1x control.tar.gz: contains a file "control" (which describes the package) and optional files like "postinst" (a script executed right after extraction). There are other files you might include, and a Google search should turn up information about the available scripts.
1x data.tar.gz: contains the portion of the root filesystem structure with the files/folders that need to be placed (or replaced). Additionally, you may configure the behaviour in the scripts mentioned above.
1x debian-binary: as far as I remember, this is simply a version number in a file. I don't know exactly what it means; just remember that in most cases this is 2.0.
So you can now put your .deb files in the data package. They are extracted by your script... and installed using:
# dpkg -i yourpackage1.deb yourpackage2.deb
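One caveat: dpkg locks its database while it runs, so a package's own postinst can't reliably call dpkg -i itself. A more robust variant is a small wrapper script shipped alongside the bundle (names hypothetical) that installs everything in one go:

#!/bin/sh
# install-all.sh: install bundled dependencies, then the main package
set -e
cd "$(dirname "$0")"
dpkg -i deps/*.deb
dpkg -i myapp.deb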
