Using Jenkins BUILD NUMBER in RPM spec file - linux

Not sure if anyone has tried this before or if it is easy, but:
A spec file has two unique indicators for its version:
Version (which specifies the software version)
Release (which specifies the package's build number - if you build an RPM, it's broken, and you build another one, you bump the 'Release' number).
For example:
Name: My Software
Version: 1.0.5
Release: 1
Summary: This is my software
I'm wondering if anyone has tried, or knows how, I could use the Jenkins $BUILD_NUMBER variable to dynamically change the Release number, thereby increasing the Release number every time a new successful build completes...?

It's been a long time... and thankfully I have no RPM-based systems, so I can't test this.
You can pass parameters to rpmbuild on the command line:
rpmbuild --define="version ${env.BUILD_NUMBER}"
It would be helpful to post snippets of the spec and the script you're using to build the rpm.
You don't want your build script editing the spec file, which I'm assuming it's pulling down from some source control.
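A minimal sketch of that approach, assuming a Jenkins shell build step and a spec whose Release falls back to 1 when the macro is not defined (the build_number macro name and spec path are assumptions, not from the original answer):
In the spec file:
Release: %{?build_number}%{!?build_number:1}%{?dist}
In the Jenkins shell build step:
rpmbuild --define "build_number ${BUILD_NUMBER}" -ba path/to/my.spec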

I've been using the Jenkins build number as the 'release' and packaging via fpm.
Couple fpm with some of the environment variables provided by Jenkins:
# $BUILD_ID - The current build id, such as "2005-08-22_23-59-59" (YYYY-MM-DD_hh-mm-ss)
# $BUILD_NUMBER - The current build number, such as "153"
# $BUILD_TAG - String of jenkins-${JOB_NAME}-${BUILD_NUMBER}. Convenient to put into a resource file, a jar file, etc for easier identification.
There are some nebulous variables in the example command below, but $BUILD_NUMBER is what I'm using for the release here (fpm calls it iteration instead).
fpm_out=$(fpm -a all -n $real_pkg_name -v $version -t rpm -s dir --iteration $BUILD_NUMBER ./*)
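Assuming, say, a version of 1.0.5 and Jenkins build 153, that command should produce an RPM named along the lines of the following (the package name is whatever $real_pkg_name holds, and -a all maps to noarch for RPM output):
real_pkg_name-1.0.5-153.noarch.rpm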

In my Jenkins setup, I've decided to bypass the build number with regards to the RPM version numbering completely. Instead, I use a home-made script that generates and keeps track of the various releases that are being generated.
In my spec file:
Version: %{_iv_pkg_version}
Release: %{_iv_pkg_release}%{?dist}
And in the Jenkins build script:
# Just initialising some variables, and retrieving the release number.
package="$JOB_NAME"
# We use setuptools, so we can query the package version like so.
# Use other means to suit your needs.
pkg_version="$(python setup.py --version)"
pkg_release="$(rpm-release-number.py "$package" "$pkg_version")"
# Creating the src.rpm (ignore the spec file variables)
rpmbuild --define "_iv_pkg_version $pkg_version" \
--define "_iv_pkg_release $pkg_release" \
-bs "path/to/my/file.spec"
# Use mock to build the package in a clean chroot
mock -r epel-6-x86_64 --define "_iv_pkg_version $pkg_version" \
--define "_iv_pkg_release $pkg_release" \
"path/to/my/file.src.rpm"
rpm-release-number.py is a simple script that maintains a file-based database (in JSON format, for easy maintenance). It can handle concurrent runs, so no worries there, but it won't work if you have build slaves (as far as I can tell; I don't use them, so I can't test). You can find the source code and documentation here.
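The script itself isn't reproduced in the answer; as a rough shell illustration of the same idea (this is not the author's implementation, and the counter location is an assumption), a per-package/version counter file guarded by flock could look like:
counter_dir="/var/lib/jenkins/rpm-release-counters"   # assumed location
mkdir -p "$counter_dir"
counter_file="$counter_dir/${package}-${pkg_version}.count"
exec 9>"$counter_file.lock"
flock 9                                               # serialise concurrent builds
n=$(cat "$counter_file" 2>/dev/null || echo 0)
pkg_release=$((n + 1))
echo "$pkg_release" > "$counter_file"
exec 9>&-                                             # close the fd, releasing the lock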
The result is that I get the following package versioning scheme:
# Build the same version 3 times
foo-1.1-1
foo-1.1-2
foo-1.1-3
# Increment the version number, and build twice
foo-1.2-1
foo-1.2-2
PS: Note that the Jenkins build script is just an example; the logic behind creating the rpmbuild directory structure and retrieving the .src.rpm and .spec file names is a bit more complicated.

Taking into account that the spec file could be third-party, I prefer to sed-patch the Release field before building:
sed -i 's/^Release:\(\s*\)\(.*\)$/Release:\1%{?_build_num:%{_build_num}.}%{expand:\2}/g' ./path/to/spec
rpmbuild --define "_build_num $BUILD_NUM" -ba ./path/to/spec
Here the %{expand:...} macro is used to handle macro-defined release numbers like the ones in Mageia specs:
Release: %mkrel 1
The resulting field will be:
Release: %{?_build_num:%{_build_num}.}%{expand:%mkrel 1}
Conditional expansion of the _build_num macro keeps the spec usable for a local build, e.g. if the SRPM is also prepared by the build system. But it can be reduced to:
sed -i 's/^Release:\(\s*\)\(.*\)$/Release:\1'$BUILD_NUM'.%{expand:\2}/g' ./path/to/spec
rpmbuild -ba ./path/to/spec
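For instance, with BUILD_NUM set to 42 and an original line of Release: 1%{?dist}, the first variant rewrites the field to
Release: %{?_build_num:%{_build_num}.}%{expand:1%{?dist}}
which expands to release 42.1 plus the dist tag.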

Related

boot.scr rebuild in buildroot

Is there a way to rebuild the boot.scr script without cleaning the entire project?
I removed the old boot.scr script and don't know how to generate a new one (only make clean helps).
Variable BR2_PACKAGE_HOST_UBOOT_TOOLS_BOOT_SCRIPT_SOURCE is set.
make uboot-dirclean uboot-tools-dirclean didn't help.
I found that the mkimage script that creates boot.scr is called from the uboot-tools install rule, but even after cleaning uboot-tools, boot.scr is not regenerated.
The accepted answer is correct, but there is an easier way. The boot.scr is compiled by host-uboot-tools, not uboot-tools, so you just need to execute this:
make host-uboot-tools-rebuild
If you dirclean host-uboot-tools it will rebuild your script. The reason is that mkimage (which generates the script) is called in the HOST_UBOOT_TOOLS_INSTALL_CMDS function in the uboot-tools.mk file.
Since your personal script is in your external Buildroot directory and you will probably want to iterate on writing and testing it quickly, you will want it rebuilt every time. There is a way to do this each time you run make, with no cleaning of anything required: the post-image script is the key.
For example, create your post-image.sh script and specify it in your defconfig file.
BR2_ROOTFS_POST_IMAGE_SCRIPT="$(BR2_EXTERNAL)/board/RK3308/post-image.sh"
In that post-image.sh script, run the command to generate your boot script. Here is an example:
# Generate the uboot script
$ubootName/tools/mkimage -C none -A arm -T script -d $BR2_EXTERNAL_RK3308_PATH/board/RK3308/boot.cmd $BINARIES_DIR/boot.scr
Each time you run make, the boot.scr will be regenerated.
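A minimal post-image.sh along those lines might look like this (the board directory and the use of the mkimage installed in $HOST_DIR are assumptions based on the example above):
#!/bin/sh
# Regenerate boot.scr from boot.cmd on every image build.
BOARD_DIR="$BR2_EXTERNAL_RK3308_PATH/board/RK3308"
"$HOST_DIR/bin/mkimage" -C none -A arm -T script \
    -d "$BOARD_DIR/boot.cmd" "$BINARIES_DIR/boot.scr"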
If you want to see all of this in context, here is an external buildroot repo for the rk3308 chipset.
This is the post-image.sh file.
This is the definition of that file in the defconfig file.
U-Boot provides the tool mkimage. In Debian-based distributions it is in the package u-boot-tools. Given that you have a file boot.txt with your script commands, you can create boot.scr with:
mkimage -T script -n 'My fancy title' -d boot.txt boot.scr

Integrating custom Linux test project package with Alpine

I have downloaded the Linux Test Project repository and compiled it. I now want to integrate it into the Alpine binary image at build time. A Unix shell script with Alpine-specific commands kicks in at compile time and adds these packages to the Alpine image. All the standard packages (like Python, Nginx and memcached) are integrated successfully by this script, except LTP. The command used here is apk add <package name>; the same command doesn't work with the custom-built LTP binary.
I tried a number of things, like upgrading the apk package, supplying the entire LTP repository using the --repository option, and trying to manually generate an APKBUILD.tar.gz. Nothing works. Any help would be deeply appreciated.
Thanks in advance!
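For reference, apk add only installs packages that have been built into .apk files and indexed, so one possible route (not from the original question, and untested here) is to wrap the LTP build in an APKBUILD and build it with abuild; a rough skeleton, with name, version, source and install prefix as placeholder assumptions:
# APKBUILD (hypothetical skeleton)
pkgname=ltp
pkgver=20230127                      # placeholder version
pkgrel=0
pkgdesc="Linux Test Project test suite"
url="https://linux-test-project.github.io/"
arch="all"
license="GPL-2.0-or-later"
source="ltp-full-$pkgver.tar.xz"     # placeholder source tarball
builddir="$srcdir/ltp-full-$pkgver"

build() {
    ./configure --prefix=/opt/ltp
    make
}

package() {
    make DESTDIR="$pkgdir" install
}
Building it with abuild -r produces an .apk that can then be installed with apk add --allow-untrusted, or served from an indexed repository passed via --repository.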

mvn dependency:get -Ddest parameter in Linux

I'm using mvn dependency:get to download some specific *.jar files (latest release version) on Linux Ubuntu.
I am using the following command:
mvn dependency:get -Dartifact=org.apache.httpcomponents:httpmime:RELEASE:jar -DrepoUrl=https://repo1.maven.org/maven2 -Ddest=/home/dev/workspace/
The artifact is downloaded as expected, but only to the local Maven repository, not copied to the specified location. I have also tried using
-Ddest=home/dev/workspace/
-Ddest=/home/dev/workspace
But there is no difference in the result.
I know there is the possibility to use dependency:copy, but for that I would need a pom.xml (which I don't want to create only for the copy command).
Thanks for your help
I have given an example below. Please check:
mvn dependency:get -Dartifact=org.apache.maven:maven-core:2.2.1:jar:sources -DremoteRepositories=http://repo1.maven.apache.org/maven2 -Ddest=/home/dev/workspace/something.jar
You can also use dependency:copy-dependencies. From the plugin docs:
Artifacts can also be resolved by specifying the classifier and optionally type. Type is only used with the classifier and defaults to java-sources. When the classifier is set, the list of dependencies is used as the base to resolve artifacts with the classifier and type.
For example:
mvn dependency:copy-dependencies -Dclassifier=sources
will try to find the sources for all dependencies and copy them.
More generally, the command line looks like:
mvn dependency:copy-dependencies [optional params]
Resource Link:
http://maven.apache.org/plugins/maven-dependency-plugin/usage.html
https://maven.apache.org/plugins/maven-dependency-plugin/copy-dependencies-mojo.html
UPDATE:
use "-Ddest=path/to/my.jar"; otherwise, it is just copied to your local ~/.m2/repository (makes it look like nothing happened).
See http://maven.apache.org/plugins/maven-dependency-plugin/get-mojo.html#destination
Example:
mvn org.apache.maven.plugins:maven-dependency-plugin:2.5.1:get
-DremoteRepositories=repo.maven.apache.org
-Dartifact=org.apache.ant:ant:1.8.1
-Ddest=ant-1.8.1.jar
(result: ant-1.8.1.jar in current directory)
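Applied to the command from the question, that would look something like the following (the destination jar name is just an illustrative placeholder; the key point is that -Ddest should name a file, not a directory):
mvn dependency:get -Dartifact=org.apache.httpcomponents:httpmime:RELEASE:jar -DremoteRepositories=https://repo1.maven.org/maven2 -Ddest=/home/dev/workspace/httpmime.jar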

Build debian package without .orig file

I've created packages previously by using a Makefile, running "dh_make --createorig", adjusting the files in the generated debian folder, and finally using the debuild command to generate the .deb. That workflow is simple and works for me, but I was told to adjust it a little so that the project can be built from the sources without requiring the orig files, and I'm unsure how to do it. According to this (https://askubuntu.com/questions/17508/how-to-have-debian-packaging-generate-two-packages-given-an-upstream-source-arch) and this structure (http://bazaar.launchpad.net/~andrewsomething/imagination/debian/files) there must be a way. In my case I would have a folder with the sources and all of that, and then a debian folder (generated with dh_make), but I'm unsure how to stop debuild from asking for the .orig files, or whether I should be using some other command for this.
Sorry for the super-long question; I think I provided all the relevant information, but I can clarify if anything is fuzzy.
The difference is in the version number in the file debian/changelog.
If you use 1.2.3-1 it implies Debian build 1 of an upstream package 1.2.3, for which the build programs (dpkg-buildpackage and whichever wrappers sit on top) assume an .orig.tar.gz to exist.
But if you use 1.2.3 it will consider the package 'Debian native' and the archive is just a .tar.gz and not an .orig.tar.gz.
Now the choice should not be driven by your convenience alone. If this has an upstream source, use the first scheme. If not, the second can be fine. In the packages I maintain I have both but way more of the former.
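For illustration, the two schemes differ only in the version string of the first debian/changelog entry (mypackage is a placeholder name). A non-native package starts with
mypackage (1.2.3-1) unstable; urgency=medium
and dpkg-buildpackage expects a mypackage_1.2.3.orig.tar.gz next to the source tree, while a native package starts with
mypackage (1.2.3) unstable; urgency=medium
and the whole source, debian directory included, is packed as a single mypackage_1.2.3.tar.gz.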
If you want to create a debian directory directly in the source package (i.e. you're packaging your own work, rather than from an upstream release), you could use the --native option to dh_make.
I think the question was asked differently; it was somewhat clear that the project had an upstream, and that's probably not a good reason to change its format to native.
Currently I package an upstream Python project, and this exact same question came to my mind. Why isn't there any dh_* hook to override in order to generate this orig tarball on the fly, so you don't get bothered by:
This package has a Debian revision number but there does not seem to be
an appropriate original tar file or .orig directory in the parent directory;
For a start, I added a Makefile to the project:
# Makefile
VERSION:=$(shell dpkg-parsechangelog -S Version | sed -rne 's,([^-\+]+)+(\+dfsg)*.*,\1,p'i)
UPSTREAM_PACKAGE:=click_${VERSION}.orig.tar.gz

dpkg:
	tar cafv ../${UPSTREAM_PACKAGE} . --exclude debian --exclude .git
	debuild -uc -us

clean:
	rm -f ../${UPSTREAM_PACKAGE}
	debuild clean
so a simple make clean dpkg was all it took to build the package.
Now I think the question remains: if someone has a bright idea of how to insert the tar operation within debian/rules, so I could just call debuild -uc -us and it would magically create the orig tarball, that would be awesome :)

Why is the bash I compiled (3.8M) bigger than the one my system provided (937K)?

I have downloaded the bash shell version 4.2 from here: http://ftp.gnu.org/gnu/bash/, and then compiled it myself using the following commands:
./configure
make
But the bash I compiled is much bigger than the one the system provided:
This is my bash:
$ ls -lh bash
-rwxrwxr-x 1 unimous unimous 3.8M Oct 31 23:57 bash
This is the system's bash:
$ ls -hl /bin/bash
-rwxr-xr-x 1 root root 937K Sep 19 21:40 /bin/bash
The sizes would match more closely if you ran strip(1). As Tim Post points out, the production make targets for bash do not strip, as they assume users want debugging symbols. However, distribution builds do strip.
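A quick way to see the effect on your own build (exact sizes will vary):
file ./bash        # a fresh build typically reports "not stripped"
strip ./bash
ls -lh ./bash      # the size should drop much closer to the distro binary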
Assuming you have downloaded the source for bash that directly matches the version of your system, there are several things that could be different:
Bash has a lot of build configuration options because it's one of the most portable pieces of software in existence. The default ./configure will select options that most people would want that will work on the system you're using to build it.
The production install targets for bash do not strip the installed executable. Most people that download the source for bash and build it themselves are probably doing so because they want different behavior than their OS provides, or a different version altogether. In either case, people would want debug symbols left until stability has been established.
'Special sauce' added by the distribution
To directly reproduce the build, you'd need to get the source to the package your distro provided, and use the build tools (or, just grab the build config options out of them and apply them to the version you downloaded), then strip the resulting executable.
Keep in mind, distros are absolutely free to apply their own patches (or 'sauce') to the things they package. You have to check for this as well, and be sure to apply the same patches to bash that your distro did.
In short, it's easier to just grab the source package from your distro if all you want to do is reproduce the build, unless you want to use the official release version that GNU provides.
It's impossible to know exactly why without knowing how both you and the package maintainer compiled bash.
It's possible you statically linked certain libraries that are dynamically linked in the system-provided package. Perhaps they compiled with fewer features enabled. Maybe they used different compile flags; if you have debug symbols included but the package has been fully stripped, that would also account for some of the difference. Different optimization levels would also have an effect on size.
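To narrow down which of these applies, comparing the two binaries directly helps:
file /bin/bash ./bash                     # stripped vs. not stripped
ldd /bin/bash && ldd ./bash               # static vs. dynamic library linkage
/bin/bash --version && ./bash --version   # confirm you are comparing the same version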
