I currently have a BitBake .bb recipe that looks like this:
DESCRIPTION = "Hello World"
SECTION = "TESTING"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
PR = "r0"
SRC_URI = "file://fileA \
file://fileB"
S = "${WORKDIR}"
inherit allarch
do_install() {
echo "--------HELLO WORLD------------------------"
}
Now when I go to the build directory and run bitbake on this recipe, I do not see the output "Hello world" anywhere. Any suggestions on why I don't see it?
You could use bitbake -e myRecipe > ./myRecipe.log to look more deeply into what is going on. do_install will not echo anything to the console when you run bitbake.
Instead, the output is stored in the task log files under ${TMPDIR}/work/${MULTIMACH_TARGET_SYS}/${PN}/${EXTENDPE}${PV}-${PR}/temp
In log.do_install, you should be able to see something like this:
DEBUG: Executing shell function do_install
--------HELLO WORLD------------------------
DEBUG: Shell function do_install finished
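If you don't want to expand those path variables by hand, one way to find the log from the build directory (assuming the recipe is called myrecipe; the grep/cut pipeline below is just a sketch) is to let bitbake -e tell you the work directory:
cat "$(bitbake -e myrecipe | grep '^WORKDIR=' | cut -d'"' -f2)/temp/log.do_install"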
you can do it like below (full source)
do_install() {
bbplain "--------HELLO WORLD------------------------"
printf "%b\0" "bbplain --------HELLO WORLD------------------------" > ${LOGFIFO}
}
For faster (and somewhat noisy) debugging you could also use bbnote/bbwarn in shell tasks. For python tasks there is bb.note/bb.warn.
Look here: http://patchwork.openembedded.org/patch/59021/
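A rough sketch of both flavours (the task names and messages are only examples; the bbnote/bbwarn shell helpers come from logging.bbclass, which OpenEmbedded recipes inherit by default):
do_install() {
    # bbnote lands in the task log; bbwarn is also surfaced as a console WARNING
    bbnote "install step reached for ${PN}"
    bbwarn "something worth flagging"
}

# Python tasks use the bb module instead of the shell helpers
python do_report() {
    bb.note("PN is %s" % d.getVar("PN"))
    bb.warn("python-side warning")
}
addtask report after do_install before do_build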
You get a more readable view of which tasks have executed by piping bitbake through something, so that it knows not to use fancy screen updates:
bitbake $recipe | cat
This gives you a nice sequential stream of tasks with bbnote/bbwarn in between.
I am new to the Yocto Project and now I really need to do this simple task.
I have this part of a .bb recipe file:
S = "${WORKDIR}"
HELLO = "hello"
HELLO = "hell"
SRC_URI = "file://myserver.tar.gz"
do_compile() {
make
}
And now I want to track the values of my variable, for example, HELLO.
I was introduced to the bitbake -e command, but it seems to show me only the last modification of this variable; I need to see both the "hello" value and the "hell" value in the debug information.
Thanks! I am using Ubuntu 18.04, if for some reason that helps.
You could use the logging class, for example in a shell task:
inherit logging
do_compile_prepend() {
    bbdebug 1 "HELLO = ${HELLO}"
}
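The Python-side equivalent, if you prefer a separate task, would be something along these lines (the task name is arbitrary; bb.debug takes a debug level as its first argument):
python do_showvar() {
    bb.debug(1, "HELLO = %s" % d.getVar("HELLO"))
}
addtask showvar before do_compile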
If I have this task in a nimble file:
task readme, "generate README.md":
exec "nim c -r readme.nim > README.md"
with this readme.nim:
echo "# Hello\nworld"
Executing the task with nimble (nimble readme) does not redirect the output of readme.nim to the file.
As expected, running nim c -r readme.nim > README.md from a terminal correctly creates/updates README.md.
Is this the intended behaviour of nimble? Is there a workaround?
Note: the above was tested on Windows.
Thanks to the answer by @xbello and the ensuing discussion, I found a good workaround for my use case:
task readme, "generate README.md":
exec "nim c readme.nim"
"README.md".writeFile(staticExec("readme"))
The explanation for why the simple exec does not work has to do with the fact that nimble uses nimscript's exec, which internally uses rawExec, a builtin that (judging from the different behaviours reported here for Windows and Linux) is not entirely cross-platform when it comes to output redirection.
I end up with the expected README.md:
$ cat README.md
# Hello
world
But sometimes (when readme.nim has to be compiled or recompiled) I end up with something like this:
CC: readme.nim
# Hello
world
That is, the full stdout (not the stderr) of the nim c -r readme.nim command, as expected. As a workaround you could encapsulate what you want to do in the readme.nim:
import os
let f: File = open(commandLineParams()[0], fmWrite)
f.write "# Hello\nworld"
f.close()
And in your nimble file:
task readme, "generate README.md":
exec "nim c -r readme.nim README.md"
Another workaround could be to suppress the output of nim c:
task readme, "generate README.md":
exec "nim c --verbosity:0 -r readme.nim > README.md"
I am using Gradle for AOSP, and I would like to check whether a command exists in my build environment.
task printCommand{
doLast{
def command = "git --version"
println command.execute().text
}
}
The above code runs perfectly; it prints the output of the command "git --version".
But when I try another command, following Check if a program exists from a Bash script:
task printCommand{
doLast{
def command = "command -v docker"
println command.execute().text
}
}
It always fails with an error like this:
Execution failed for task ':printCommand'.
java.io.IOException: Cannot run program "command": error=2, No such file or directory
Why can't I use "command -v docker" in this way?
Are there any better ways to check whether a command exists in Gradle?
command is a bash builtin, not a binary.
Groovy's String.execute() starts a process directly. The binary that the process is started from has to be given fully qualified (e.g. "/usr/bin/docker --version") or must be found on your $PATH (or %PATH%).
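A minimal Groovy sketch of that idea (assuming bash is available to the Gradle process) is to hand the builtin to a shell instead of executing it directly:
def proc = ["bash", "-c", "command -v docker"].execute()
proc.waitFor()
println(proc.exitValue() == 0 ? "docker found" : "docker not found")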
Getting back to the subject, I found a way to ensure that a command exists while using Gradle. This code avoids the Gradle script being terminated by a non-zero exitValue, and prints the appropriate information.
task checkCommand{
    doLast{
        result = exec{
            def command = "command -v docker"
            ignoreExitValue = true
            executable "bash"
            args "-l", "-c", command
        }
        if(result.getExitValue()==0){
            println "Has Docker"
        }else{
            print "No Docker"
        }
    }
}
Update 2019/02/23
If you get this error:
Could not set unknown property 'result' for task ':checkCommand' of
type org.gradle.api.DefaultTask
Adding def in front of result will fix this issue.
I wish to write some Haskell that calls an executable as part of its work, and install this on a NixOS host. I don't want the executable to be in my PATH (and relying on that would disrupt the beautiful dependency model of Nix).
If this were, say, a Perl script, I would have a simple builder that looked for strings of a certain format and replaced them with the executable names, based upon dependencies declared in the .nix file. But that seems somewhat harder with the cabal-based building common to Haskell.
Is there a standard idiom for encoding the paths to executables at build time (including during development, as well as at install time) within Haskell code on Nix?
For the sake of a concrete example, here is a trivial "script":
import System.Process ( readProcess )
main = do
  stdout <- readProcess "hostname" [] ""
  putStrLn $ "Hostname: " ++ stdout
I would like to be able to compile and run this (in principle) without relying on hostname being in the PATH, but rather replacing hostname with the full /nix/store/-inetutils-/bin/hostname path, and thus also gaining the benefits of dependency management under Nix.
This could possibly be managed by using a shell (or similar) script, built using a replacement scheme as defined above, that sets up the environment the Haskell executable expects; but that would still need some bootstrapping via cabal.mkDerivation, and since I'm a lover of optparse-applicative's bash completion, I'm loath to slow that down with another script to fire up every time I hit the tab key. But if that's what's needed, fair enough.
I did look through cabal.mkDerivation for some sort of pre-build step, but if it's there I'm not seeing it.
Thanks,
Assuming you're building the Haskell app in Nix, you can patch a configuration file via your Nix expression. For an example of how to do this, have a look at this small project.
The crux is that you can define a postConfigure hook like this:
pkgs.haskell.lib.overrideCabal yourProject (old: {
  postConfigure = ''
    substituteInPlace src/Configuration.hs --replace 'helloPrefix = Nothing' 'helloPrefix = Just "${pkgs.hello}"'
  '';
})
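For reference, the Haskell side of that pattern is just an ordinary value that the hook rewrites before compilation; a minimal sketch (the module name, helloPrefix, and the use of pkgs.hello mirror the substituteInPlace call above and are purely illustrative):
-- src/Configuration.hs
module Configuration (helloPrefix) where

-- Nothing during a plain cabal build; the Nix postConfigure hook patches this
-- to Just "/nix/store/...-hello-..." so callers can build absolute paths, e.g.
-- maybe "hello" (++ "/bin/hello") helloPrefix.
helloPrefix :: Maybe FilePath
helloPrefix = Nothing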
What I do with my xmonad build in nix [1] is refer to executable paths as things like ##compton##/bin/compton. Then I use a script like this to generate my default.nix file:
#!/usr/bin/env bash
set -eu
packages=($(grep '##[^#]*##' src/Main.hs | sed -e 's/.*##\(.*\)##.*/\1/' | sort -u))
extra_args=()
for p in "${packages[@]}"; do
extra_args+=(--extra-arguments "$p")
done
cabal2nix . "${extra_args[@]}" \
| head -n-1
echo " patchPhase = ''";
echo " substituteInPlace src/Main.hs \\"
for p in "${packages[@]}"; do
echo " --replace '##$p##' '\${$p}' \\"
done
echo " '';"
echo "}"
What it does is grep through src/Main.hs (this could easily be changed to find all Haskell files, or some specific configuration module) and pick out all the tags surrounded by ## like ##some-package-name##. It then does two things with them:
passes them to cabal2nix as extra arguments for the nix expression it generates
post-processes the nix expression output from cabal2nix to add a patch phase, which replaces the ##some-package-name## tag in the Haskell source file with the actual path to the derivation [2]
This generates a nix-expression like this:
{ mkDerivation, base, compton, networkmanagerapplet, notify-osd
, powerline, setxkbmap, stdenv, synapse, system-config-printer
, taffybar, udiskie, unix, X11, xmonad, xmonad-contrib
}:
mkDerivation {
  pname = "xmonad-custom";
  version = "0.0.0.0";
  src = ./.;
  isLibrary = false;
  isExecutable = true;
  executableHaskellDepends = [
    base taffybar unix X11 xmonad xmonad-contrib
  ];
  description = "My XMonad build";
  license = stdenv.lib.licenses.bsd3;
  patchPhase = ''
    substituteInPlace src/Main.hs \
      --replace '##compton##' '${compton}' \
      --replace '##networkmanagerapplet##' '${networkmanagerapplet}' \
      --replace '##notify-osd##' '${notify-osd}' \
      --replace '##powerline##' '${powerline}' \
      --replace '##setxkbmap##' '${setxkbmap}' \
      --replace '##synapse##' '${synapse}' \
      --replace '##system-config-printer##' '${system-config-printer}' \
      --replace '##udiskie##' '${udiskie}' \
  '';
}
The net result is I can just write Haskell code and a cabal package file; I don't have to worry much about maintaining the nix package file as well, only re-running my generate-nix script if my dependencies change.
In my Haskell code I just write paths to executables as if ##the-nix-package-name## was an absolute path to a folder where that package is installed, and everything magically works.
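As a small illustration of what that looks like in practice (callProcess and the compton tag are only examples; after the patchPhase the tag has become an absolute /nix/store path):
import System.Process (callProcess)

main :: IO ()
main = do
  -- "##compton##" is rewritten to the compton derivation's store path
  -- by the generated patchPhase, so this is an absolute path at run time.
  callProcess "##compton##/bin/compton" []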
The installed xmonad binary ends up containing hardcoded references to the absolute paths to the executables I call, which is how nix likes to work (this means it automatically knows about the dependency during garbage collection, for example). And I don't have to worry about keeping the things I called in my interactive environment's PATH, or maintaining a wrapper that sets up PATH just for this executable.
[1] I have it set up as a cabal project that gets built and installed into the nix store, rather than having it dynamically recompile itself from ~/.xmonad/xmonad.hs
[2] Step 2 is a little meta, since I'm using a bash script to generate nix code with an embedded bash script in it
This is not intended to be an answer, but if I posted it in the comment section the formatting would turn out ugly.
Also, I am not sure if this hack is the right way to do the job.
I noticed that if I use nix-shell I can get the full path into the Nix store.
Assuming the hash is always the same (AFAIK it is), you can hard-code it in the build recipe.
$ which bash
/run/current-system/sw/bin/bash
[wizzup# ~]
$ nix-shell -p bash
[nix-shell:~]$ which bash
/nix/store/wb34dgkpmnssjkq7yj4qbjqxpnapq0lw-bash-4.4-p12/bin/bash
Lastly, I doubt you have to do any of this if you use buildInputs; it should be the same path.
From reading this thread, it looks like it's possible to use the shebang to run Rust*.
#!/usr/bin/env rustc
fn main() {
println!("Hello World!");
}
Making this executable and running it does compile, but does not run, the code.
chmod +x hello_world.rs
./hello_world.rs
However this only compiles the code into hello_world.
Can *.rs files be executed directly, similar to a shell script?
* This references rustx. I looked into it, but it's a bash script that compiles the script every time (without caching) and never removes the file from the temp directory, although this could be improved. It also has the significant limitation that it can't use crates.
There's cargo-script. That also lets you use dependencies.
After installing cargo-script via cargo install cargo-script, you can create your script file (hello.rs) like this:
#!/usr/bin/env run-cargo-script
fn main() {
println!("Hello World!");
}
To execute it, you need to:
$ chmod +x hello.rs
$ ./hello.rs
Compiling hello v0.1.0 (file://~/.cargo/.cargo/script-cache/file-hello-d746fc676c0590b)
Finished release [optimized] target(s) in 0.80 secs
Hello World!
To use crates from crates.io, please see the tutorial in the README linked above.
This seems to work:
#!/bin/sh
//usr/bin/env rustc $0 -o a.out && ./a.out && rm ./a.out ; exit
fn main() {
println!("Hello World!");
}
I have written a tool just for that: Scriptisto. It is a fully language-agnostic tool and it works with other compiled languages, or with languages that have expensive validation steps (e.g. Python with mypy).
For Rust it can also fetch dependencies behind the scenes or build entirely in Docker, without having a Rust compiler installed. scriptisto embeds those templates into the binary so you can bootstrap easily:
$ scriptisto new rust > ./script.rs
$ chmod +x ./script.rs
$ ./script.rs
Instead of new rust you can do new docker-rust and the build will not require a Rust compiler on your host system.
#!/bin/sh
#![allow()] /*
exec cargo-play --cached --release $0 -- "$@"
*/
Needs cargo-play. You can see a solution that doesn't need anything here:
#!/bin/sh
#![allow()] /*
# rust self-compiler by Mahmoud Al-Qudsi, Copyright NeoSmart Technologies 2020
# See <https://neosmart.net/blog/self-compiling-rust-code/> for info & updates.
#
# This code is freely released to the public domain. In case a public domain
# license is insufficient for your legal department, this code is also licensed
# under the MIT license.
# Get an output path that is derived from the complete path to this self script.
# - `realpath` makes sure if you have two separate `script.rs` files in two
# different directories, they get mapped to different binaries.
# - `which` makes that work even if you store this script in $PATH and execute
# it by its filename alone.
# - `cut` is used to print only the hash and not the filename, which `md5sum`
# always includes in its output.
OUT=/tmp/$(printf "%s" $(realpath $(which "$0")) | md5sum | cut -d' ' -f1)
# Calculate hash of the current contents of the script, so we can avoid
# recompiling if it hasn't changed.
MD5=$(md5sum "$0" | cut -d' ' -f1)
# Check if we have a previously compiled output for this exact source code.
if !(test -f "${OUT}.md5" && test "${MD5}" = "$(cat ${OUT}.md5)"); then
    # The script has been modified or is otherwise not cached.
    # Check if the script already contains an `fn main()` entry point.
    if grep -Eq '^\s*(\[.*?\])*\s*fn\s*main\b*' "$0"; then
        # Compile the input script as-is to the previously determined location.
        rustc "$0" -o ${OUT}
        # Save rustc's exit code so we can compare against it later.
        RUSTC_STATUS=$?
    else
        # The script does not contain an `fn main()` entry point, so add one.
        # We don't use `printf 'fn main() { %s }'` because the shebang must
        # come at the beginning of the line, and we don't use `tail` to skip
        # it because that would result in incorrect line numbers in any errors
        # reported by rustc; instead we just comment out the shebang but leave
        # it on the same line as `fn main() {`.
        printf "fn main() {//%s\n}" "$(cat $0)" | rustc - -o ${OUT}
        # Save rustc's exit code so we can compare against it later.
        RUSTC_STATUS=$?
    fi
    # Check if we compiled the script OK, or exit bubbling up the return code.
    if test "${RUSTC_STATUS}" -ne 0; then
        exit ${RUSTC_STATUS}
    fi
    # Save the MD5 of the current version of the script so we can compare
    # against it next time.
    printf "%s" ${MD5} > ${OUT}.md5
fi
# Execute the compiled output. This also ends execution of the shell script,
# as it actually replaces its process with ours; see exec(3) for more on this.
exec ${OUT} $@
# At this point, it's OK to write raw rust code as the shell interpreter
# never gets this far. But we're actually still in the rust comment we opened
# on line 2, so close that: */