How can I create a CentOS 5.8 .iso image with custom packages? I have to create an iso with only the packages needed for our production system. I already have all the rpms with their dependencies resolved in a folder. I have successfully created a repository from that folder with createrepo.
As I understand it, I should put the rpm files in the CentOS folder, and the repodata folder should contain the metadata needed for the package manager. I don't know whether I should modify the existing comps.xml file or create a new one, or which structure to use, since this is only a subset of the packages contained on a default CentOS installation disk.
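For reference, I generated the repo metadata with createrepo; my understanding is that a groups file would be wired in via its -g flag, along these lines (paths here are placeholders):

createrepo -g comps.xml /path/to/my/rpms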
I know it is probably futile to delete packages from the default iso, but that is my work order and there's not much to be done about it. (There are also some packages I need that are not available on the default iso.)
Much appreciated
I feel as though your question is a bit vague for a topic with this sort of breadth, but I'll do my best to offer an answer. I think you should use Kickstart for this task: it will result in a much happier customer, whether internal or external, and easier management for you going forward when things get updated. Start by reviewing the CentOS documentation; if you are already this far and are just asking about removing packages, check out this section of the docs: it talks about specifying your packages and removing the ones you don't want.
If you only have the one style of production machine, then this should take care of it. If you have multiple different configurations, I'd suggest taking a look at a configuration management tool such as Ansible, Puppet, or Salt. This would allow you to provide a base image via Kickstart, then build off of that image depending on the needs of whoever is consuming the system.
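As a minimal sketch (the group and package names here are examples, not recommendations), the %packages section of a Kickstart file lets you pull in groups, add individual packages, and strip out ones you don't want with a leading dash:

%packages
@base
httpd
-NetworkManager
-sendmail

On newer releases the section also needs a closing %end line.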
I am working on learning Hybris. I have successfully installed Hybris, and there are lots and lots of blogs out there that talk about getting the core Hybris install with your own custom modules to make changes to, such as this one:
http://javainsimpleway.com/hybris-b2b-installation/
In the blog above the gentleman creates a mystore. The question I have is this: once you have this all set up, you have made changes to the mystore modules, and you want to get those changes onto a new developer's machine (or production machine), how do you do it?
What I have tried, which does not work, is this:
Zipped up bin/custom/mystore, config/local.properties, and localextensions.xml
Followed his steps 1 through 4
Unzipped the files on the new machine
Jumped down to step 12, where he does an ant clean all initialize
One difference between his process and mine is that I am adding some addons. It is my impression that all those changes happen within custom/mystore, but to be safe, between my steps 3 and 4 I reran ant addoninstall for all four addons.
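For reference, those reruns looked roughly like this (the addon and storefront names are placeholders):

ant addoninstall -Daddonnames="myaddon" -DaddonStorefront.yacceleratorstorefront="mystorefront"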
With the process I have documented, SmartEdit was not working, and I found SAP's documentation about running ant npminstall because Hybris does not include the npm-related third-party JavaScript libraries. Things blow up when I go to run ant npminstall.
I really feel like I am trying to reinvent the wheel here. I would imagine what I am trying to do is very common for any Hybris team, but I cannot find documentation on how to do it. Does anyone know of a blog out there that talks about how to migrate the source from one machine to another?
there are lots and lots of blogs out there that talk about getting the
core Hybris install with your own custom modules to make changes to
Although they may be helpful, I would suggest you stick to official Hybris documentation (e.g. https://help.sap.com/viewer/4c33bf189ab9409e84e589295c36d96e/1905/en-US/8acc8a5a86691014a20781b3f738213e.html) which is quite rich.
Once you have this all set up, you have made changes to the mystore
modules, and you want to get those changes onto a new developer's machine
(or production machine), how do you do it?
For production deployment, please go through https://wiki.hybris.com/display/hybrisALF/Ant+Production+for+Continuous+Integration
However, for simply copying things from one machine to another, the artefacts you have already copied to the target machine (after installing Hybris there) are the right ones. If you are working in a team, you would typically set up an SCM code repository (e.g. Git, SVN), which makes this much easier.
It is my impression that all those changes happen within custom/mystore
This is a wrong impression. When you run addoninstall, it creates or updates the project.properties file in the addon, not in your custom/mystore. So, if the addon is part of the code repository (which is typically not the case unless it is a custom addon), anyone pulling your code onto their machine will automatically get the addon's project.properties and therefore will not need to run addoninstall on their machine; otherwise, they do need to run it themselves. A workaround is to copy the content of the addon's project.properties into local.properties (thus getting the changes onto the target machine when local.properties is copied over).
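A minimal sketch of that workaround, assuming the addon lives under bin/ext-addons (the exact path varies by Hybris version, and the addon name is a placeholder):

cat bin/ext-addons/myaddon/project.properties >> config/local.properties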
Things blow up when I go to run ant npminstall.
Make sure to run ant npminstall as an admin user. Please check https://answers.sap.com/questions/12771768/smart-edit-unable-to-find-local-grunt.html for another option.
The need
Recently I've started flirting with the idea of making my own customized Debian live distro. My aim is to have a USB stick with Debian, specific packages, custom scripts, and files installed on it. That way, I can take my OS with everything I need to work with, without taking my laptop with me. Furthermore, it will be especially useful if I just want to replicate the OS without the hassle of installing every single package and redoing the customizations all over again.
The research
So I decided to go for it and educate myself on the subject. I found the Linux From Scratch (LFS) project, but to be honest, it would take lots of time that I currently cannot afford to invest (though I'm seriously considering it for the future).
I decided to use the live-build project scripts, based on the instructions and examples in their manual: http://live.debian.net/manual/3.x/html/live-manual.en.html
The problem
So far, I've built a hybrid.iso image with a custom selection of packages by specifying them in config/package-lists/mylist.list.chroot.
Then I tried to copy my custom scripts, files, and software into specific folders under the chroot directory just created, e.g.
mkdir chroot/etc/skel/<custom dir here>
or
cp <some file or script> chroot/usr/local/bin/
and then run
lb build binary
The problem is that the iso doesn't get built again after the first time I run lb build, and the customizations made in the chroot directory are deleted every time I try to build it again.
I've tried...
lb clean --binary
lb clean --stage
lb build binary
or
lb build binary iso
So what am I missing? How can I add custom files, folders, and scripts to my custom live Debian without downloading every single package all over again?
Why isn't the iso image built again after the first time I run lb build?
Thanks in advance...
P.S.: I decided to be very detailed in the writing so anyone could understand, especially those who want to try the same...
I am aware of LFS too. But this
My aim is to have a USB stick with Debian, specific packages, custom
scripts, and files installed on it.
and this
it would take lots of time that I currently cannot afford to invest
led me to this answer.
I have two suggestions. The easy one: use tools like Remastersys or live-magic.
Follow this link.
The difficult one: follow the official documentation on how to create a custom Debian CD.
Debian official doc
This answer comes a year late for the original poster, but for future searchers: don't add files directly to the chroot. Instead, make a folder structure in config/includes.chroot. Then your customizations will be retained when you rebuild the image.
See the section "Live/chroot local includes" in the debian-live manual: http://live.debian.net/manual/4.x/html/live-manual.en.html#506
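A minimal sketch of that layout (the script name is a placeholder); everything under config/includes.chroot is overlaid onto the root of the live filesystem at build time:

mkdir -p config/includes.chroot/usr/local/bin
cp myscript config/includes.chroot/usr/local/bin/
lb clean
lb build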
I have several applications that I wish to deploy using rpm. Some of the files in my application deployments override files from other deployed packages. Simply including the new files in the deployment package will cause rpm conflicts.
I am looking for the proper way to use rpm to update/replace already installed files.
I have already come up with a few solutions but nothing seems quite right.
1. Maintain custom versions of the rpms containing the original files.
This seems like a large amount of work for a relatively small reward, even though it feels less like a hack than some of the other possible solutions.
2. Include the files in the rpm under another name and copy them into place in the %post section.
This would work, but it means littering the system with multiple copies of the files. It also means additional maintenance in the rpm build spec for each file.
3. Use wget in the %post section to replace the original files from some known server.
This is similar to the copy technique, but the files wouldn't even live in the rpm. It might act as a nice central configuration authority, though.
4. Deploy the files as new files, then use symlinks to override the originals.
This is also similar to the copy technique, but with less clutter. The problem here is that some files don't behave well as symlinks.
To the best of my knowledge, RPM is not designed to permit updating / replacing existing files, so anything that you do is going to be a hack.
Of the options you list, I'd choose #1 as the least bad hack if the target systems are systems that I admin (as you say, it's more work but is the cleanest solution) and a combination of #2 and #4 (symlinks where possible, copies where not) if I'm creating the RPMs for others' systems (to avoid having to distribute a bunch of RPMs, but I'd make it very clear in the docs what I'm doing).
You haven't described which files need to be updated or replaced and how they need to be updated. Depending on the answers to those questions, you may have a couple of other options:
Many programs are designed to use a single default configuration file and also to grab configuration files from a .d subdirectory. For example, Apache uses /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*.conf, so your RPMs could drop files under /etc/httpd/conf.d instead of modifying /etc/httpd/conf/httpd.conf. And if the files that you need to modify are config files that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability; this wouldn't help you immediately but would make future releases easier.
For command-line utilities like sendmail and lpr that can be provided by multiple packages, the alternatives system (see man alternatives) permits more than 1 RPM that provides these utilities to be installed side by side. Again, if the files that you need to modify are command-line utilities that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability.
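As a sketch (the provider path and priority here are illustrative), registering and choosing a provider with the alternatives system looks like:

alternatives --install /usr/sbin/sendmail mta /usr/sbin/sendmail.postfix 30
alternatives --config mta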
Config file changes on systems that you administer are better managed through a tool like Cfengine or Puppet rather than through custom RPMs. I think that Red Hat favors Puppet.
If I were creating the RPMs for systems I don't administer, I'd consider using a third-party tool like Bitrock and dumping all of my stuff under /opt just so I wouldn't have to stomp on files installed by other admins' RPMs.
Edit (2019): Nowadays, Software Collections offers a useful alternative. You can create packages that install somewhere under /opt, and the Software Collections tools offer a standardized way for users to opt in to using those instead of whatever's normally installed under /usr. Red Hat uses this to distribute newer versions of tools for their otherwise stable and long-lived (i.e., older) Red Hat Enterprise Linux distributions.
You can also execute rpm -U --replacefiles --replacepkgs ..., which will give you what you want.
See here for more info on RPM %files directives:
http://www.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html
You can use the argument passed to the %pre and %post scriptlets (and their %preun/%postun counterparts) to determine whether you are installing, upgrading, or removing the package. That argument, $1, is the number of instances of the package installed at that point in the transaction:
If $1 is 0 - we're removing the last instance; no copies will remain installed (seen in the uninstall scriptlets).
If $1 is 1 - we're installing fresh, for a total of 1 instance installed.
If $1 is 2 or more - we're upgrading, and $1 includes the instance(s) already installed.
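A minimal shell sketch of a %post scriptlet using this (the echo bodies are placeholders for real setup work):

if [ "$1" -eq 1 ]; then
    echo "fresh install: do first-time setup"
elif [ "$1" -ge 2 ]; then
    echo "upgrade: migrate from the previous version"
fi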
These scriptlet arguments help with managing files between versions.
Keep track of what you're doing between versions, and consider what might happen if a user skips a version or two.
Keep these considerations in mind and you should be good to go!
I'm working with a shell account on a shared system, and I would like to use some version control software for my bash scripts and config files.
It must work on Linux with no dependencies: just unpack it in my home dir. I don't need remote checkout, branching, or other fancy tricks. I just want to be able to commit my changes and restore them if needed.
Try Git.
Fossil.
Single binary.
No dependencies.
Version control.
Built-in ticket tracker and wiki.
CLI and web interface.
Mercurial. You can just install it in a local directory and make sure that's in your PATH. It gives you a lot of power.
Update for comment:
Most hosting accounts have way more storage than you'll ever need (e.g. WebFaction gives you 10GB on a $10/mo account), so install Python locally. When you do the build/install, simply add --prefix=/home/you/local. It will create local/bin/, local/lib/, etc. Now you have your own Python, and you can install Mercurial using it.
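A hedged sketch of such a local build (the version number and paths are illustrative):

tar xzf Python-2.7.18.tgz
cd Python-2.7.18
./configure --prefix=$HOME/local
make && make install
export PATH=$HOME/local/bin:$PATH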
If your account has little storage, or is missing critical build tools (like gcc, etc.), then you are using the wrong hosting.
I just found Darcs looking at previous questions. It fits perfectly to my needs.
Thanks, Adam, for your suggestion, but Git depends on several packages whose required versions are not all available to me.
I use Subversion. Works fine for local access.
I also check my scripts out remotely on most of my shell accounts, I must say. It's a really convenient way to make sure the setup of the different accounts stays aligned.
I'm trying to update our installer so a user can simply double-click on a file and have all the dependencies and our software installed easily. This is a suite of applications that will be deployed on a clean Ubuntu 8.04 (Hardy Heron) installation. I have investigated making a .deb file, but listing the dependencies doesn't work, because there isn't any Internet access available. And any script that would set up a local APT repository would still need to be run from the command line. Is there a way to put a .deb file inside of a .deb file?
I know many companies ship shell scripts that you have to chmod +x and then execute. This is not acceptable. It is ridiculous that this isn't possible, especially considering the distribution and architecture are fixed.
If you are totally confident that it will be installed on the same system every time, you can find the list of package dependencies yourself, fetch them from the Ubuntu repositories, and package them up with your software. You just have to be clear that your software is for a specific version, and you'll probably have to deal with things like keeping up with maintenance releases.
You can also easily install with a script. As for your complaint about making scripts executable, well, I don't know how you're shipping your product, but since you say it's going somewhere without Internet access, I assume it's going to be copied from some kind of media. If you make the script executable when you put it on that media, you're done.
If you'd like to do this using packages, you can create a CD-ROM which contains a package repository. You can find all kinds of information on this with Google Search. For starters, try this - it's a GUI for doing it. http://aptoncd.sourceforge.net/
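If you roll the media repository by hand instead, a rough sketch looks like this (directory names are placeholders; dpkg-scanpackages ships with dpkg-dev):

cd debs && dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz
# then on the target machine:
echo "deb file:/media/cdrom/debs ./" >> /etc/apt/sources.list
apt-get update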
A makeself self-extracting executable that starts the install script using sudo will work.
The user can either run it from a terminal (after chmod-ing it) or can double-click it and tell it to "Run" from the prompt.
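For illustration (all names here are placeholders), building such an archive looks roughly like:

makeself.sh ./payload my-installer.run "My Product Installer" ./install.sh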
It's possible to put .deb files inside a .deb file. The only thing you need to do is configure the appropriate scripts.
A .deb file consists of:
1x control.tar.gz: contains a file "control" (which describes the package) and optional files such as "postinst" (a script executed right after extraction). There are other files you might include; a web search should turn up information about the available scripts.
1x data.tar.gz: contains a structure mirroring the root filesystem, holding the files/folders to be (re)placed. You can additionally configure the behaviour in the scripts mentioned above.
1x debian-binary: as far as I remember, this is simply a file holding the format's version number; in most cases it is 2.0.
So you can now put your .deb files into the data archive. They get extracted by your script... and installed using:
# dpkg -i yourpackage1.deb yourpackage2.deb
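A minimal sketch of such an install script (paths are placeholders). Note that dpkg cannot be invoked from inside another package's maintainer script, because the dpkg database is locked during installation, so a script like this is normally run standalone after extraction:

#!/bin/sh
set -e
# install the bundled packages extracted alongside this script
dpkg -i ./debs/*.deb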