I've been stuck on what I'm sure is a minor issue, so any help would be greatly appreciated. I've created a standard Ubuntu package with dh_make. The purpose of this package is to set up all the LDAP-related packages a system needs, including their configuration. One of the steps is to copy an /etc/ldap.conf file into place while making a backup of the existing file. How do I do this? I created a postinst script that looks essentially like the following, but I'm not clear on how the package stores its files, and I get an error about a missing etc/ldap.conf file. What's the best way to do this? Here is my postinst script:
#!/bin/bash -xv
install -v -b etc/ldap.conf /etc/ldap.conf > /tmp/tst 2>&1
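For what it's worth, the backup behaviour I'm after does work in isolation. Here is a self-contained demo of the backup-then-copy step, using a temp directory as a stand-in for the real filesystem (the /usr/share/navldapubuntu path is just an assumed staging location, not something my package currently ships):

```shell
set -e
# Stand-in filesystem: temp dir instead of the real /
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/share/navldapubuntu" "$tmp/etc"
echo "new config" > "$tmp/usr/share/navldapubuntu/ldap.conf"
echo "old config" > "$tmp/etc/ldap.conf"
# install -b backs up the existing target (default suffix: ~) before copying
install -b "$tmp/usr/share/navldapubuntu/ldap.conf" "$tmp/etc/ldap.conf"
```

After this, the target holds the new file and the previous contents survive as ldap.conf~.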
Here is my skeleton structure:
root@hqd-clientb-16:~/navldapubuntu-0.1/debian# tree
.
├── changelog
├── compat
├── control
├── copyright
├── docs
├── etc
│   └── ldap.conf
├── install
├── postinst
├── README.Debian
├── README.source
├── rules
├── source
│   └── format
├── navldapubuntu
│   └── etc
├── navldapubuntu.debhelper.log
├── navldapubuntu.dirs
└── navldapubuntu.doc-base.EX
Here's some additional information of the package I created.
dpkg --contents tnoldapubuntu_0.1-1_all.deb (truncated output)
./usr/
./usr/share/
./usr/share/doc
./usr/share/doc/navldapubuntu/
./usr/share/doc/navldapubuntu/copyright
./usr/share/doc/navldapubuntu/README.Debian
./usr/share/doc/navldapubuntu/changelog.Debian.gz
./etc/ldap.conf
There is a special tool designed for creating configuration packages: http://debathena.mit.edu/config-packages
Here is a simple template that could be helpful for a quick start.
List of files
template (directory)
template/debian (directory)
template/debian/control
template/debian/changelog
template/debian/displace
template/debian/rules
template/debian/postinst
template/debian/install
template/debian/docs
template/debian/compat
template/README
template/BUILD
template/files (directory)
template/files/etc/ldap.conf.mycompanyname
Content
template/debian/control:
Source: PACKAGE_NAME
Section: morpho/misc
Priority: optional
Maintainer: MAINTAINER
Build-Depends: debhelper, config-package-dev (>= 5.0~)
Package: PACKAGE_NAME
Architecture: all
Depends: ${misc:Depends}, DEPENDENCY [, DEPENDENCY ...]
Provides: ${diverted-files}
Conflicts: ${diverted-files}
Description: PACKAGE_DESCRIPTION_SHORT
PACKAGE_DESCRIPTION_LONG.
template/debian/displace
/etc/ldap/ldap.conf.mycompanyname
template/debian/install
files/* /
template/debian/postinst
#!/bin/sh
set -e
#DEBHELPER#
POSTINST_SCRIPT
template/debian/rules
#!/usr/bin/make -f
# Exclude *.svn* from building
# (you probably don't need this if you don't use SVN)
export DH_ALWAYS_EXCLUDE=.svn
# Core (check http://debathena.mit.edu/config-packages for more information)
%:
	dh $@ --with=config-package
# Prevent dh_installdeb from treating files in /etc as conffiles.
# You need this if configuration files should always be overwritten,
# even if they were changed locally.
override_dh_installdeb:
	dh_installdeb
	rm debian/*/DEBIAN/conffiles
template/debian/docs
README
BUILD
And finally you can build this package with the following command:
dpkg-buildpackage -us -uc -I.svn
You need to create a "conffiles" file in the DEBIAN directory, next to the "control" file, and declare /etc/ldap.conf in it. That way the file will automatically be treated as a configuration file, and changes to it will prompt the usual "new config file, do you want to overwrite it" dialog.
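In case you assemble the binary package by hand with dpkg-deb rather than through debhelper (which registers files under /etc as conffiles automatically), a rough sketch of that layout could look like this. Only the scaffolding is created here; the actual build call is left commented out, since it needs a complete control file:

```shell
set -e
# Hypothetical manual package layout for dpkg-deb
mkdir -p pkg/DEBIAN pkg/etc
# Declare the file as a conffile, one absolute path per line
printf '/etc/ldap.conf\n' > pkg/DEBIAN/conffiles
printf 'new ldap settings\n' > pkg/etc/ldap.conf
# ...write pkg/DEBIAN/control here, then:
# dpkg-deb --build pkg navldapubuntu_0.1-1_all.deb
```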
I'm trying to add the Debian package libsystemd, but I keep getting the following error and am not sure how to solve it:
ERROR: Nothing PROVIDES 'libsystemd' (but example.bb DEPENDS on or otherwise requires it). Close matches:
libteam
systemd
systemd RPROVIDES libsystemd
NOTE: Runtime target 'example' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['example', 'libsystemd']
fatal error: systemd/sd-daemon.h: No such file or directory
   16 | #include <systemd/sd-daemon.h>
Related post: Yocto Build Dependency on Debian Package
example.bb
DESCRIPTION = "Example Utilities"
LICENSE = "CLOSED"
inherit cmake systemd useradd
require common.inc
S = "${WORKDIR}/git/example-server"
DEPENDS = "simple-web-server boost sqlite3 libsystemd"
systemd/sd-daemon.h is provided by the systemd recipe:
In the image (${D}) folder there is:
$ tree usr/include/
usr/include/
├── libudev.h
└── systemd
├── sd-bus.h
├── sd-bus-protocol.h
├── sd-bus-vtable.h
├── _sd-common.h
├── sd-daemon.h <======(Header file you are looking for)
├── sd-device.h
├── sd-event.h
├── sd-hwdb.h
├── sd-id128.h
├── sd-journal.h
├── sd-login.h
└── sd-messages.h
Also, libsystemd is provided by the systemd recipe, so change DEPENDS to:
DEPENDS = "simple-web-server boost sqlite3 systemd"
# ^
# |
# ===========================================
I currently have a C++ project which depends on some external shared objects (.so). My current directory looks like this:
├── src
│ └── .cpp files
├── include
│ ├── glad
│ │ └── .h files
│ └── fmod
│ ├── core
│ │ └── .h files
│ └── studio
│ └── .h files
├── lib
│ └── fmod
│ ├── core
│ │ └── .so files
│ └── studio
│ └── .so files
├── Makefile.am
└── configure.ac
I want to compile this project while simultaneously copying those .so files towards /usr/lib or /usr/local/lib, however I can't seem to be able to do this!
The following is my configure.ac file
AC_INIT([autoGL], 1.0)
AM_INIT_AUTOMAKE
AC_PROG_CC
AC_PROG_CXX
AC_CONFIG_FILES(Makefile)
AC_OUTPUT
And my Makefile.am
bin_PROGRAMS = autogl
autogl_SOURCES = src/Source.cpp
autogl_SOURCES+= src/glad.c
autogl_LDADD = -lglfw -ldl
autogl_LDADD+= -L lib/fmod/core -lfmod
autogl_LDADD+= -L lib/fmod/studio -lfmodstudio
autogl_LDFLAGS = -Wl,--no-as-needed,-rpath,lib/fmod/core,-rpath,lib/fmod/studio
autogl_CPPFLAGS = -I include
autogl_CPPFLAGS+= -I include/fmod/core
autogl_CPPFLAGS+= -I include/fmod/studio
autogl_CPPFLAGS+= -I include/fmod/fsbank
You can see that I'm linking every library using the link flags -L lib/fmod/---- -library. Initially, the seventh line of my Makefile.am was only
autogl_LDFLAGS = -Wl,--no-as-needed
resulting in the following g++ command, which was successful and gave me an executable file:
g++ -g -O2 -Wl,--no-as-needed -o autogl autogl-Source.o autogl-glad.o -lglfw -ldl -L lib/fmod/core -lfmod -L lib/fmod/studio/ -lfmodstudio
However, when I tried to run this, I would get the following error:
./autogl: error while loading shared libraries: libfmod.so.12: cannot open shared object file: No such file or directory
My shared objects are not being copied to /usr/lib or /usr/local/lib.
With the addition of
autogl_LDFLAGS = -Wl,--no-as-needed,-rpath,lib/fmod/core,-rpath,lib/fmod/studio
since we are linking rpath to our lib files, the program has no problem running. However, if I run make install, the rpath being linked would be /usr/bin/lib/fmod/core and /usr/bin/lib/fmod/studio, which clearly don't have the needed files. My .so files are still not being copied anywhere. I want to copy my .so files directly to /usr/local/lib so that my program can run without me having to link it directly.
How can I force automake to copy these .so files directly to a folder of my choice? (preferable /usr/local/lib).
Found a solution!
Autotools also offers the possibility to install data files. I added the following to my Makefile.am:
flashdir=$(prefix)/lib
flash_DATA= lib/fmod/core/libfmodL.so \
lib/fmod/core/libfmodL.so.12 \
lib/fmod/core/libfmodL.so.12.10
.....
This adds all of my .so files to $(prefix)/lib, which usually is /usr/local/lib.
However, there is a problem, particularly on Ubuntu, where /usr/local/lib is not listed by default in /etc/ld.so.conf.d, so libraries in /usr/local/lib are not found.
To solve this I added the following lines to my Makefile.am:
install-data-hook:
	ldconfig $(prefix)/lib
This creates a hook which runs AFTER the lib files have been installed into $(prefix)/lib; running ldconfig with that directory as an argument adds the libraries it contains to the dynamic linker cache, so after make install everything runs smoothly.
I have two projects in my user directory ~: project A and project B.
I run stack init and later stack build on project A. Then I have the binaries of the A package in the folder ~/.stack-work/install/x86_64-linux/lts-6.0/7.10.3/bin. The issue is that B needs this version of the A package, so I tried the same kind of build with stack in the B project directory. In ~/B I ran the following command, without success:
stack build ~/.stack-work/install/x86_64-linux/lts-6.0/7.10.3/bin
How can I do that? What if I create a third package C, and need something similar?
Excerpts:
The A.cabal content.
name: A
version: 1.1
And the B.cabal.
name: B
version: 1.0
build-depends: A>= 1.1
Then,
$ stack init
Looking for .cabal or package.yaml files to use to init the project.
Using cabal packages:
- B.cabal
Selecting the best among 8 snapshots...
* Partially matches lts-6.0
A version 1.0 found
- A requires ==1.1
This may be resolved by:
- Using '--omit-packages' to exclude mismatching package(s).
- Using '--resolver' to specify a matching snapshot/resolver
But I actually have version 1.1 of A built.
You don't need to include the project A's bin directory - that was a red herring.
Organize your files like this:
.
├── stack.yaml
├── project-A
│ ├── LICENSE.txt
│ ├── Setup.hs
│ ├── project-A.cabal
│ └── src
│ └── ...
│
└── project-B
├── Setup.hs
├── project-B.cabal
└── src
└── ...
Your top-level stack.yaml file will look like:
resolver: lts-5.13
packages:
- project-A/
- project-B/
Then in the top-level directory run stack build.
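If it helps, the layout and the top-level stack.yaml above can be sketched as a script; stack build itself is left commented out, since it needs the real project sources in place:

```shell
set -e
# Create the two-project layout from the answer (src dirs are placeholders)
mkdir -p multi/project-A/src multi/project-B/src
cat > multi/stack.yaml <<'EOF'
resolver: lts-5.13
packages:
- project-A/
- project-B/
EOF
# cd multi && stack build   # builds project-A first, then project-B against it
```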
I'll take a stab at answering your question...
How about putting
~/.stack-work/install/x86_64-linux/lts-6.0/7.10.3/bin
in your PATH? If the other project really needs binaries (i.e. programs) built by another project, this would be the way to do it.
Or, copy the built programs to some directory in your current PATH - i.e. /usr/local/bin or ~/bin.
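The first option, putting the stack bin directory on PATH, would look like this for the current shell session (path copied from the question):

```shell
# Prepend project A's stack install dir to PATH for this shell
BIN="$HOME/.stack-work/install/x86_64-linux/lts-6.0/7.10.3/bin"
export PATH="$BIN:$PATH"
```

To make it permanent you would put those lines in your shell's startup file.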
If this doesn't answer your question, please post the cabal files for both projects.
I found an answer after digging into the stack FAQ. Create a stack.yaml file in the B folder. At first the content could be:
resolver: lts-6.0
packages:
- '.'
- '/home/jonaprieto/A'
extra-deps: []
Then run:
$ stack build
I have a Python application that comes with a setup.py script and can be installed via Pip or setuptools. However, I'm finding some annoying differences between the two methods and I want to know the correct way of distributing data-files.
import glob
import setuptools
long_description = ''
setuptools.setup(
name='creator-build',
version='0.0.3-dev',
description='Meta Build System for Ninja',
long_description=long_description,
author='Niklas Rosenstein',
author_email='rosensteinniklas@gmail.com',
url='https://github.com/creator-build/creator',
py_modules=['creator'],
packages=setuptools.find_packages('.'),
package_dir={'': '.'},
data_files=[
('creator', glob.glob('creator/builtins/*.crunit')),
],
scripts=['scripts/creator'],
classifiers=[
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Intended Audience :: Developers",
"Topic :: Utilities",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
],
license="MIT",
)
Using Pip, the files specified in data_files end up in sys.prefix + '/creator'.
Using setuptools (that is, running setup.py directly), the files end up in lib/python3.4/site-packages/creator_build-0.0.3.dev0-py3.4.egg/creator.
Ideally, I would like the files to always end up in the same location, independent from the installation method. I would also prefer the files to be put into the module directory (the way setuptools does it), but that could lead to problems if the package is installed as a zipped Python Egg.
How can I make sure the data_files end up in the same location with both installation methods? Also, how would I know if my module was installed as a zipped Python Egg and how can I load the data files then?
I've been asking around, and the general consensus, including the official docs, is:
Warning: data_files is deprecated. It does not work with wheels, so it should be avoided.
Instead, everyone appears to be pointing towards include_package_data.
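For data files that already live inside the package directory, as the .crunit files in the first question do, include_package_data is typically paired with a MANIFEST.in. A minimal sketch (file contents only, written into a temp directory so nothing is clobbered; the filenames follow the question's creator/builtins layout):

```shell
set -e
tmp=$(mktemp -d)
# MANIFEST.in decides which non-code files get picked up
cat > "$tmp/MANIFEST.in" <<'EOF'
include creator/builtins/*.crunit
EOF
# setup.cfg (or the equivalent setup() keyword) turns the inclusion on
cat > "$tmp/setup.cfg" <<'EOF'
[options]
include_package_data = True
EOF
```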
There's a drawback here in that it doesn't allow including things outside of your src root, which means that if the data lives outside the creator package directory, it won't be included. Even package_data has this limitation.
The only workaround, if your data files live outside your source tree (for instance, I'm trying to include examples/*.py for a lot of reasons we don't need to discuss), is to hot-swap them in, run the setup, and then remove them.
import setuptools, glob, shutil
with open("README.md", "r") as fh:
long_description = fh.read()
shutil.copytree('examples', 'archinstall/examples')
setuptools.setup(
name="archinstall",
version="2.0.3rc4",
author="Anton Hvornum",
author_email="anton@hvornum.se",
description="Arch Linux installer - guided, templates etc.",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/Torxed/archinstall",
packages=setuptools.find_packages(),
classifiers=[
"Programming Language :: Python :: 3.8",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: POSIX :: Linux",
],
python_requires='>=3.8',
package_data={'archinstall': glob.glob('examples/*.py')},
)
shutil.rmtree('archinstall/examples')
This is at best ugly, but works.
My folder structure for reference is (in the git repo):
.
├── archinstall
│ ├── __init__.py
│ ├── lib
│ │ ├── disk.py
│ │ └── exceptions.py
│ └── __main__.py
├── docs
│ ├── logo.png
├── examples
│ ├── guided.py
│ └── minimal.py
├── LICENSE
├── profiles
│ ├── applications
│ │ ├── awesome.json
│ │ ├── gnome.json
│ │ ├── kde.json
│ │ └── postgresql.json
│ ├── desktop.py
│ ├── router.json
│ ├── webserver.json
│ └── workstation.json
├── README.md
└── setup.py
And this is the only way I can see to include, for instance, my profiles as well as my examples without moving them out of the root of the repository (which I'd prefer not to do, as I want users to find them easily when browsing the repo on GitHub).
One final note: if you don't mind polluting the src directory (in my case that's just archinstall), you can symlink in whatever you need to include instead of copying it:
cd archinstall
ln -s ../examples ./examples
ln -s ../profiles ./profiles
That way, when setup.py or pip installs it, they'll end up in the <package dir> as its root.
I am trying to create the makefiles and configure script for my library, whose directory structure is like the following:
$projectroot
├── part1
│ ├── src
│ └── lib
├── part2
│ ├── src
│ └── lib
└── part3
├── src
└── lib
As you can see, this project has 3 different parts. Someone might want to install the entire project, or might need only one library from it.
I have a Makefile like the following:
SUBDIRS = part1/lib part1/src part2/lib part2/src part3/lib part3/src
part1:
	cd part1/lib; make
	cd part1/src; make
part2:
	cd part2/lib; make
	cd part2/src; make
part3:
	cd part3/lib; make
	cd part3/src; make
The problem comes around when I use
$ make part1 install
which installs the whole project, but I want to install just part1, not all the parts.
How can I do that?
Your question is really difficult to parse, but I have a feeling that you need an additional set of install targets for each part:
part1_install: part1
	cd part1/lib; make install
	cd part1/src; make install
Then you can just execute make part1_install (part1 will be built implicitly).