Hello, I am trying to create the Makefiles and configure script for my library, whose directory structure is as follows:
$projectroot
├── part1
│   ├── src
│   └── lib
├── part2
│   ├── src
│   └── lib
└── part3
    ├── src
    └── lib
As you can see, this project has 3 different parts. Someone might want to install the entire project, while someone else might need only one library from the project.
I have a Makefile like the following:
SUBDIRS = part1/lib part1/src part2/lib part2/src part3/lib part3/src

part1:
	cd part1/lib; make
	cd part1/src; make

part2:
	cd part2/lib; make
	cd part2/src; make

part3:
	cd part3/lib; make
	cd part3/src; make
The problem comes when I run
$ make part1 install
which installs the whole project, but I want to install only part1, not all the parts.
How can I do that?
Your question is really difficult to parse, but I have a feeling that you need an additional set of install targets for each part:
part1_install: part1
	cd part1/lib; make install
	cd part1/src; make install
Then you can just execute make part1_install (part1 will be built implicitly).
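If you want the same pattern for all three parts without repeating yourself, here is a minimal sketch using GNU make static pattern rules (the part names come from the question; the rest is illustrative):

PARTS := part1 part2 part3

.PHONY: $(PARTS) $(PARTS:%=%_install)

# Build each part by recursing into its lib and src subdirectories.
$(PARTS):
	$(MAKE) -C $@/lib
	$(MAKE) -C $@/src

# Each partN_install depends on partN, so a part is always built before it is installed.
$(PARTS:%=%_install): %_install: %
	$(MAKE) -C $*/lib install
	$(MAKE) -C $*/src install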
What's the difference between binary and library in Rust?
I read The Cargo Book, but couldn't understand it well.
I generated two folders using cargo new a --bin and cargo new b --lib; however, both of them look the same inside. What are the purposes of --bin and --lib, and what is the difference between them?
A binary crate compiles to an executable (or several) that can be installed in the user's path and run as usual.
The purpose of a library crate, on the other hand, is not to create executables but to provide functionality for other crates to depend on and use.
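As a small illustration (a sketch with hypothetical names, not the code cargo generates): a library crate exposes public items, and a binary crate provides the main() entry point that can call into the library, given a dependency such as somelib = { path = "../somelib" } in the binary's Cargo.toml:

// somelib/src/lib.rs -- the library crate exposes public functions
pub fn greet(name: &str) -> String {
    format!("Hello, {name}!")
}

// somebinary/src/main.rs -- the binary crate provides the entry point
fn main() {
    println!("{}", somelib::greet("world"));
}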
They also differ in their structure:
$ cargo new --bin somebinary
$ cargo new --lib somelib
     Created library `somelib` package
$ tree somebinary/
somebinary/
├── Cargo.toml
└── src
    └── main.rs

1 directory, 2 files
$ tree somelib/
somelib/
├── Cargo.toml
└── src
    └── lib.rs
You can also find more information in this rust-lang forum thread: https://users.rust-lang.org/t/what-is-the-difference-between-cargo-new-lib-and-cargo-new-bin/19009
One creates src/main.rs and the other creates src/lib.rs; they differ in the nature of the files created. The difference lies in whether you are interested in creating a library or a binary.
Are you sure you ran those exact same commands?
$ tree
.
├── a
│   ├── Cargo.toml
│   └── src
│       └── main.rs
└── b
    ├── Cargo.toml
    └── src
        └── lib.rs
I have a barebones workspace project:
.
├── build-debug.sh
├── Cargo.lock
├── Cargo.toml
├── common
│   ├── Cargo.toml
│   └── src
│       └── lib.rs
├── rs-test.iml
├── server
│   ├── Cargo.toml
│   └── src
│       └── main.rs
└── wui
    ├── Cargo.toml
    └── src
        └── lib.rs
The .rs files are either empty or contain just an empty main function.
server and wui depend on common: common = { path = "../common" }.
The common project has one crates.io dependency which, I suppose, has a build script or a proc-macro dependency.
The build script:
cargo build -p wui --target wasm32-unknown-unknown
cargo build -p server
The problem: when I rebuild the unchanged project, some wui dependencies get invalidated and rebuilt, and then the same happens for server.
If I either:
remove the wasm32 target flag, or
replace the dependency with a simple crate that has no build-time compiled dependencies,
then the subprojects are no longer rebuilt.
Is this a cargo bug? What can I do?
It's probably not a cargo bug. What is likely happening here is that your crates.io dependency (you don't mention what it is, which might have been useful) has different dependencies or features depending on the target architecture. Thus, as you alternate between building the WASM target and your host target, stuff is being rebuilt.
Perhaps it would be better in this case to stop using the Cargo workspace and build the server and wui separately; this way you'll have separate target directories for the server and wui, which takes some extra disk space and takes longer for non-incremental compilation, but will prevent you from having to rebuild that stuff all the time as you build both.
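If splitting up the workspace is undesirable, a workaround sketch is to give each invocation its own target directory inside the workspace via cargo's CARGO_TARGET_DIR environment variable (whether this fully avoids the rebuilds depends on the dependency in question):

# Hypothetical variant of the build script: one target directory per
# build, so host and wasm32 artifacts never invalidate each other.
CARGO_TARGET_DIR=target-wasm cargo build -p wui --target wasm32-unknown-unknown
CARGO_TARGET_DIR=target-host cargo build -p server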
meta/recipes-core/initrdscripts/files/init-install-efi.sh is used for formatting and creating partitions.
I have modified this file to create one more partition for software updates.
Can I copy the newly updated script file into my own custom layer as recipes-core/initrdscripts/files/init-install-efi.sh?
Will that update init-install-efi.sh? If not, how can I achieve this? I don't want to touch the poky source code, as that is fetched using the repo utility.
$ tree meta-ncr/
meta-ncr/
├── conf
│   ├── bblayers.conf
│   ├── layer.conf
│   └── machine
│       └── panther2.conf
├── recipes-core
│   └── initrdscripts
│       ├── files
│       │   └── init-install-efi.sh
│       └── initramfs-live-install-efi_1.0.bbappend
└── scripts
    └── setup-environment
$ cat meta-ncr/recipes-core/initrdscripts/initramfs-live-install-efi_1.0.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI = "file://init-install-efi.sh"
After debugging, I found that it is copying the script present in the meta-intel layer and not of my layer.
This is from the output of bitbake-layers show-appends
initramfs-live-install-efi_1.0.bb:
/home/jamal/repo_test/sources/meta-intel/recipes-core/initrdscripts/initramfs-live-install-efi_%.bbappend
/home/jamal/repo_test/sources/meta-ncr/recipes-core/initrdscripts/initramfs-live-install-efi_1.0.bbappend
Can you please tell me what changes are required for my bbappend to take effect instead of the one from meta-intel?
Yocto provides the bbappend mechanism to achieve your goal without touching the metadata from poky. Please follow these few steps:
Create a new layer or use your existing one.
In this layer, create a bbappend file for initramfs-module-install-efi_1.0.bb or initramfs-live-install-efi_1.0.bb (I found that these recipes are based on this script), with the following content:
$ cat meta-test/recipes-core/initrdscripts/initramfs-live-install-efi_1.0.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI = "file://init-install-efi.sh"
Move the modified script file under the files directory; your meta layer structure should look like:
$ tree meta-test/
meta-test/
├── conf
│   └── layer.conf
├── COPYING.MIT
├── README
└── recipes-core
    └── initrdscripts
        ├── files
        │   └── init-install-efi.sh
        └── initramfs-live-install-efi_1.0.bbappend

4 directories, 5 files
Then, finally, after running the do_unpack task on the initramfs-live-install-efi recipe, you will find your modified file in the recipe workspace:
$ bitbake -c unpack initramfs-live-install-efi
Test:
$ cat tmp/work/i586-poky-linux/initramfs-live-install-efi/1.0-r1/init-install-efi.sh
#!/bin/bash
echo "hello"
FILESEXTRAPATHS is used to extend the search path for the do_fetch and do_patch tasks.
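Since both layers prepend their files directories to FILESEXTRAPATHS, which copy is found first follows the order in which the bbappends are applied, and that order is governed by layer priority. A sketch of the relevant fragment of meta-ncr/conf/layer.conf (the collection name and the value 11 are illustrative; the priority only needs to exceed meta-intel's BBFILE_PRIORITY):

# Illustrative fragment, not the complete layer.conf.
BBFILE_COLLECTIONS += "ncr"
BBFILE_PATTERN_ncr = "^${LAYERDIR}/"
# Higher number = higher priority: meta-ncr's bbappend is then applied
# last, so its files path ends up first in the search path.
BBFILE_PRIORITY_ncr = "11"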
I have two projects in my user directory ~: project A and project B.
I run stack init and later stack build on project A. Then I have the binaries of the A package in the folder ~/.stack-work/install/x86_64-linux/lts-6.0/7.10.3/bin. The issue is that B needs this version of the binaries from package A, so I tried the same build with stack in project B's directory. In ~/B I ran the following command, without success:
stack build ~/.stack-work/install/x86_64-linux/lts-6.0/7.10.3/bin
How can I do that? And what if I create a third package C that needs something similar?
Excerpts:
The A.cabal content.
name: A
version: 1.1
And the B.cabal.
name: B
version: 1.0
build-depends: A >= 1.1
Then,
$ stack init
Looking for .cabal or package.yaml files to use to init the project.
Using cabal packages:
- B.cabal
Selecting the best among 8 snapshots...
* Partially matches lts-6.0
    A version 1.0 found
        - A requires ==1.1
This may be resolved by:
    - Using '--omit-packages' to exclude mismatching package(s).
    - Using '--resolver' to specify a matching snapshot/resolver
But I actually have version 1.1 of A built.
You don't need to include project A's bin directory - that was a red herring.
Organize your files like this:
.
├── stack.yaml
├── project-A
│   ├── LICENSE.txt
│   ├── Setup.hs
│   ├── project-A.cabal
│   └── src
│       └── ...
└── project-B
    ├── Setup.hs
    ├── project-B.cabal
    └── src
        └── ...
Your top-level stack.yaml file will look like:
resolver: lts-5.13
packages:
- project-A/
- project-B/
Then in the top-level directory run stack build.
I'll take a stab at answering your question...
How about putting
~/.stack-work/install/x86_64-linux/lts-6.0/7.10.3/bin
in your PATH? If the other project really needs binaries (i.e. programs) built by another project, this would be the way to do it.
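For instance (a sketch using the path from the question):

# Prepend project A's stack install directory to PATH for this shell session.
export PATH="$HOME/.stack-work/install/x86_64-linux/lts-6.0/7.10.3/bin:$PATH"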
Or, copy the built programs to some directory in your current PATH - e.g. /usr/local/bin or ~/bin.
If this doesn't answer your question, please post the cabal files for both projects.
I found an answer after digging into stack's FAQ. Create a stack.yaml file in the B folder. At first, the content could be:
resolver: lts-6.0
packages:
- '.'
- '/home/jonaprieto/A'
extra-deps: []
Then run:
$ stack build
I have a Python application that comes with a setup.py script and can be installed via Pip or setuptools. However, I'm finding some annoying differences between the two methods and I want to know the correct way of distributing data-files.
import glob
import setuptools

long_description = ''

setuptools.setup(
    name='creator-build',
    version='0.0.3-dev',
    description='Meta Build System for Ninja',
    long_description=long_description,
    author='Niklas Rosenstein',
    author_email='rosensteinniklas#gmail.com',
    url='https://github.com/creator-build/creator',
    py_modules=['creator'],
    packages=setuptools.find_packages('.'),
    package_dir={'': '.'},
    data_files=[
        ('creator', glob.glob('creator/builtins/*.crunit')),
    ],
    scripts=['scripts/creator'],
    classifiers=[
        "Development Status :: 5 - Production/Stable",
        "Programming Language :: Python",
        "Intended Audience :: Developers",
        "Topic :: Utilities",
        "Topic :: Software Development :: Libraries",
        "Topic :: Software Development :: Libraries :: Python Modules",
    ],
    license="MIT",
)
Using Pip, the files specified in data_files end up in sys.prefix + '/creator'.
Using setuptools (that is, running setup.py directly), the files end up in lib/python3.4/site-packages/creator_build-0.0.3.dev0-py3.4.egg/creator.
Ideally, I would like the files to always end up in the same location, independent from the installation method. I would also prefer the files to be put into the module directory (the way setuptools does it), but that could lead to problems if the package is installed as a zipped Python Egg.
How can I make sure the data_files end up in the same location with both installation methods? Also, how would I know if my module was installed as a zipped Python Egg and how can I load the data files then?
I've been asking around, and the general consensus, including the official docs, is that:
Warning: data_files is deprecated. It does not work with wheels, so it should be avoided.
Instead, everyone appears to be pointing towards include_package_data.
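For reference, a minimal sketch of that route (the file pattern is illustrative and assumes the data lives inside the package directory): with include_package_data=True, setuptools ships the package files matched by MANIFEST.in.

# MANIFEST.in, next to setup.py, could contain for example:
#   include creator/builtins/*.crunit
import setuptools

setuptools.setup(
    name='creator-build',
    packages=setuptools.find_packages(),
    include_package_data=True,  # ship package files matched by MANIFEST.in
)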
There's a drawback here in that it doesn't allow you to include things outside of your src root. This means that if creator is outside creator-build, it won't be included. Even package_data has this limitation.
The only workaround, if your data files live outside of your source files (for instance, I'm trying to include examples/*.py for a lot of reasons we don't need to discuss), is to hot-swap them in, run the setup, and then remove them again:
import setuptools, glob, shutil

with open("README.md", "r") as fh:
    long_description = fh.read()

# Hot-swap: copy the examples into the package so setuptools can find them.
shutil.copytree('examples', 'archinstall/examples')

setuptools.setup(
    name="archinstall",
    version="2.0.3rc4",
    author="Anton Hvornum",
    author_email="anton#hvornum.se",
    description="Arch Linux installer - guided, templates etc.",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/Torxed/archinstall",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3.8",
        "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
        "Operating System :: POSIX :: Linux",
    ],
    python_requires='>=3.8',
    package_data={'archinstall': glob.glob('examples/*.py')},
)

# Clean up the copied files again after setup has run.
shutil.rmtree('archinstall/examples')
This is at best ugly, but works.
My folder structure for reference is (in the git repo):
.
├── archinstall
│   ├── __init__.py
│   ├── lib
│   │   ├── disk.py
│   │   └── exceptions.py
│   └── __main__.py
├── docs
│   ├── logo.png
├── examples
│   ├── guided.py
│   └── minimal.py
├── LICENSE
├── profiles
│   ├── applications
│   │   ├── awesome.json
│   │   ├── gnome.json
│   │   ├── kde.json
│   │   └── postgresql.json
│   ├── desktop.py
│   ├── router.json
│   ├── webserver.json
│   └── workstation.json
├── README.md
└── setup.py
And this is the only way I can see to include, for instance, my profiles as well as the examples without moving them out of the root of the repository (which I'd prefer not to do, as I want users to find them easily when navigating the repo on GitHub).
One final note: if you don't mind polluting the src directory (in my case that's just archinstall), you could symlink in whatever you need to include instead of copying it:
cd archinstall
ln -s ../examples ./examples
ln -s ../profiles ./profiles
That way, when setup.py or pip installs the package, the files will end up in the package directory as their root.