bitbake do_fetch[nostamp] = "1" has no effect - linux

I am trying to install the compiled firmware for a microprocessor into my Yocto image. This firmware is sent to the microprocessor on startup. The compiler for this firmware only runs on Windows, so it is not possible to simply clone the git repository and compile during the build. Instead, I get the firmware file from the artifacts of a GitLab CI/CD pipeline.
For debugging purposes, I would like to download this firmware file from the master branch every time I build the image.
I previously worked with a custom do_fetch task and had set do_fetch[nostamp] = "1". This worked, but I ran into trouble when curl wrote the 404 error page into the file. I have now switched to wget and SRC_URI, but even though nostamp is still set, a .done file is generated and the file is never downloaded again.
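(As an aside, curl can be told to refuse HTTP error pages rather than saving them; a minimal sketch of the earlier curl-based fetch with that guard, reusing the variable names from the recipe below:)
# --fail makes curl exit non-zero on HTTP 4xx/5xx instead of writing the error page to the output file
curl --fail --location --header "PRIVATE-TOKEN: ${PRIVATE_TOKEN}" -o ${INSTALL_NAME} ${URL}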
Here is the recipe file I use:
S = "${WORKDIR}/"
PRIVATE_TOKEN = "xxxxxxxxxxx" # redacted
PROJECT_ID = "224"
VERSION = "2.00.01"
TAG = "master"
DOWNLOAD_FILE_PATH = "Production/firmware.bin"
INSTALL_NAME = "my-firmware-file.bin"
VERSION_FILE = "version.txt"
URL = "https://git.mycompany.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${TAG}/raw/${DOWNLOAD_FILE_PATH}?job=publish_executable"
FETCHCMD_wget = "/usr/bin/env wget --header "PRIVATE-TOKEN: ${PRIVATE_TOKEN}""
BB_STRICT_CHECKSUM = "0"
SRC_URI = "${URL};downloadfilename=${INSTALL_NAME}"
do_fetch[nostamp] = "1"
do_patch(){
    echo ${VERSION} > ${VERSION_FILE}
}
do_install(){
    install -d ${D}${libdir}/test-dir/${PN}
    # Install binary from artifacts
    install -m 644 ${S}${INSTALL_NAME} ${D}${libdir}/test-dir/${PN}/${INSTALL_NAME}
    # Install version file created from tag
    install -m 644 ${S}${VERSION_FILE} ${D}${libdir}/test-dir/${PN}/${VERSION_FILE}
}
FILES_${PN} += "${libdir}/test-dir/${PN}"
After running the recipe (either bitbake <PN> or bitbake <PN> -c fetch -f) I can run ls -la build/downloads | grep my-firmware-file and it shows two entries:
-rw-rw-r-- 1 root root 613924 Nov 11 11:50 my-firmware-file.bin
-rw-rw-r-- 1 root root 463 Nov 14 14:26 my-firmware-file.bin.done
As you can see, it generated a .done file that is newer than the last download of the actual file. Even when changing the TAG variable, no new file is downloaded. However, for some reason, it always puts the correct version into the version.txt file.
What am I missing? I could run a cleanall before every build, but that does not seem like a permanent solution. Also, the end goal is to version the recipe and use the package version to pull from the correct release; but since it currently pulls from master, that does not make much sense yet.
Update: I checked the log files, and it seems wget is not the problem, as the command is never even executed. The first time after a clean, the log for do_fetch looks like this:
DEBUG: Executing python function extend_recipe_sysroot
NOTE: Direct dependencies are []
NOTE: Installed into sysroot: []
NOTE: Skipping as already exists in sysroot: []
DEBUG: Python function extend_recipe_sysroot finished
DEBUG: Executing python function do_fetch
DEBUG: Executing python function base_do_fetch
DEBUG: Trying PREMIRRORS
DEBUG: Trying Upstream
DEBUG: Fetching https://git.mycompany.com/api/v4/projects/224/jobs/artifacts/master/raw/Production/firmware.bin?job=publish_executable;downloadfilename=my-firmware-file.bin using command '/usr/bin/env wget -r --header "PRIVATE-TOKEN: xxxxxxxxxxxxxx" -O /home/.../build/downloads/my-firmware-file.bin.tmp -P /home/.../build/downloads 'https://git.mycompany.com/api/v4/projects/224/jobs/artifacts/master/raw/Production/firmware.bin?job=publish_executable''
DEBUG: Fetcher accessed the network with the command /usr/bin/env wget -r --header "PRIVATE-TOKEN: xxxxxxxxxxxxxx" -O /home/.../build/downloads/my-firmware-file.bin.tmp -P /home/.../build/downloads 'https://git.mycompany.com/api/v4/projects/224/jobs/artifacts/master/raw/Production/firmware.bin?job=publish_executable'
DEBUG: Running export PSEUDO_DISABLED=1; export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1005/bus"; export PATH="..."; export HOME="/home/..."; /usr/bin/env wget -r --header "PRIVATE-TOKEN: xxxxxxxxxxxxxx" -O /home/.../build/downloads/my-firmware-file.bin.tmp -P /home/.../build/downloads 'https://git.mycompany.com/api/v4/projects/224/jobs/artifacts/master/raw/Production/firmware.bin?job=publish_executable' --progress=dot -v
WARNING: combining -O with -r or -p will mean that all downloaded content
will be placed in the single file you specified.
--2022-11-18 07:01:11-- https://git.mycompany.com/api/v4/projects/224/jobs/artifacts/master/raw/Production/firmware.bin?job=publish_executable
Resolving git.mycompany.com (git.mycompany.com)... <IP>
Connecting to git.mycompany.com (git.mycompany.com)|<IP>|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 613924 (600K) [application/octet-stream]
Saving to: ‘/home/.../build/downloads/my-firmware-file.bin.tmp’
2022-11-18 07:01:11 (70.4 MB/s) - ‘/home/.../build/downloads/my-firmware-file.bin.tmp’ saved [613924/613924]
FINISHED --2022-11-18 07:01:11--
Total wall clock time: 0.5s
Downloaded: 1 files, 600K in 0.008s (70.4 MB/s)
WARNING: Missing checksum for '/home/.../build/downloads/my-firmware-file.bin', consider adding at least one to the recipe:
SRC_URI[sha256sum] = "83aa3c373228b48eea58964b4ffec7ad42226014351d18bed12cb9b5eb3d261e"
DEBUG: Python function base_do_fetch finished
DEBUG: Python function do_fetch finished
When I then run do_fetch again (even after changing the TAG variable, so that SRC_URI changes), the log looks as follows:
DEBUG: Executing python function extend_recipe_sysroot
NOTE: Direct dependencies are []
NOTE: Installed into sysroot: []
NOTE: Skipping as already exists in sysroot: []
DEBUG: Python function extend_recipe_sysroot finished
DEBUG: Executing python function do_fetch
DEBUG: Executing python function base_do_fetch
WARNING: Missing checksum for '/home/.../build/downloads/my-firmware-file.bin', consider adding at least one to the recipe:
SRC_URI[sha256sum] = "83aa3c373228b48eea58964b4ffec7ad42226014351d18bed12cb9b5eb3d261e"
WARNING: Missing checksum for '/home/.../build/downloads/my-firmware-file.bin', consider adding at least one to the recipe:
SRC_URI[sha256sum] = "83aa3c373228b48eea58964b4ffec7ad42226014351d18bed12cb9b5eb3d261e"
DEBUG: Python function base_do_fetch finished
DEBUG: Python function do_fetch finished
Notice the duplicated missing-checksum warning in the second log. I also tried deleting only the ./downloads/my-firmware-file.bin.done file, but the file is still not re-downloaded; the log is the same as the second one.
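(The logs above point to the actual mechanism: do_fetch[nostamp] only forces the task to re-run, but the fetcher then skips the download because downloads/my-firmware-file.bin already exists; the cached file and its .done stamp appear to be keyed on the local file name, not on the URL. A hedged workaround sketch based on that observation is to give each tag its own download file name, at the cost of one cached file per tag:)
# Sketch: a TAG change now changes the local file name, which forces a fresh fetch
SRC_URI = "${URL};downloadfilename=${TAG}-${INSTALL_NAME}"
(do_install would then need to install ${S}${TAG}-${INSTALL_NAME} accordingly.)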

Try adding the -r flag to force wget to re-download even if the file exists:
FETCHCMD_wget = "/usr/bin/env wget -r --header "PRIVATE-TOKEN: ${PRIVATE_TOKEN}""

I have tried a lot of things, and by now I feel like it is a bug in BitBake. For the moment I have implemented the workaround below, which seems to work fine. It deletes the downloaded file as well as the .done file before do_fetch executes.
# Deletes the old, cached version of the firmware file
do_fetch_prepend(){
    import os
    fw_file = f"{d.getVar('DL_DIR')}/{d.getVar('INSTALL_NAME')}"
    done_file = f"{fw_file}.done"
    if os.path.isfile(fw_file):
        os.remove(fw_file)
    if os.path.isfile(done_file):
        os.remove(done_file)
}
I hope to be able to version my recipe correctly soon, so I don't have to rely on hacks like this.
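(For the eventual versioned setup, a rough sketch of what the pinned fetch could look like; the tag naming scheme and the checksum placeholder are assumptions:)
PV = "2.00.01"
TAG = "v${PV}"   # assumption: release tags are named after the version
SRC_URI = "${URL};downloadfilename=${INSTALL_NAME}-${PV}.bin"
SRC_URI[sha256sum] = "<sha256 of the release artifact>"
(With a fixed checksum the cached download is verified instead of blindly reused, and the nostamp hack becomes unnecessary.)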

Related

Is there a solution for this odd bitbake error?

When I used Yocto to build my first Linux system and 'bitbake imx-image-multimedia' was executed, I faced this odd error:
ERROR: gnu-config-native-20190501+gitAUTOINC+b98424c249-r0 do_unpack: Unpack failure for URL: 'git://git.savannah.gnu.org/config.git'. No up to date source found: clone directory not available or not up to date: /home/admin/Linux/Yocto/fsl/downloads//git2/git.savannah.gnu.org.config.git; shallow clone not enabled
ERROR: Logfile of failure stored in: /home/admin/Linux/Yocto/fsl/build/tmp/work/x86_64-linux/gnu-config-native/20190501+gitAUTOINC+b98424c249-r0/temp/log.do_unpack.73483
ERROR: Task (virtual:native:/home/admin/Linux/Yocto/fsl/sources/poky/meta/recipes-devtools/gnu-config/gnu-config_git.bb:do_unpack) failed with exit code '1'
Curious about the logfile, I opened /home/admin/Linux/Yocto/fsl/build/tmp/work/x86_64-linux/gnu-config-native/20190501+gitAUTOINC+b98424c249-r0/temp/log.do_unpack.73483 and saw:
DEBUG: Executing python function do_unpack
DEBUG: Executing python function base_do_unpack
DEBUG: Running 'export PSEUDO_DISABLED=1; unset _PYTHON_SYSCONFIGDATA_NAME; export SSH_AUTH_SOCK="/run/user/0/vscode-ssh-auth-sock-7925763"; export PATH="/home/admin/Linux/Yocto/fsl/sources/poky/scripts/native-intercept:/home/admin/Linux/Yocto/fsl/sources/poky/scripts:/home/admin/Linux/Yocto/fsl/build/tmp/work/x86_64-linux/gnu-config-native/20190501+gitAUTOINC+b98424c249-r0/recipe-sysroot-native/usr/bin/x86_64-linux:/home/admin/Linux/Yocto/fsl/build/tmp/work/x86_64-linux/gnu-config-native/20190501+gitAUTOINC+b98424c249-r0/recipe-sysroot-native/usr/bin:/home/admin/Linux/Yocto/fsl/build/tmp/work/x86_64-linux/gnu-config-native/20190501+gitAUTOINC+b98424c249-r0/recipe-sysroot-native/usr/sbin:/home/admin/Linux/Yocto/fsl/build/tmp/work/x86_64-linux/gnu-config-native/20190501+gitAUTOINC+b98424c249-r0/recipe-sysroot-native/usr/bin:/home/admin/Linux/Yocto/fsl/build/tmp/work/x86_64-linux/gnu-config-native/20190501+gitAUTOINC+b98424c249-r0/recipe-sysroot-native/sbin:/home/admin/Linux/Yocto/fsl/build/tmp/work/x86_64-linux/gnu-config-native/20190501+gitAUTOINC+b98424c249-r0/recipe-sysroot-native/bin:/home/admin/Linux/Yocto/fsl/sources/poky/bitbake/bin:/home/admin/Linux/Yocto/fsl/build/tmp/hosttools"; export HOME="/root"; git -c core.fsyncobjectfiles=0 branch --contains b98424c249119b79d3f709e26eb86f2fd4d5e5f3 --list master 2> /dev/null | wc -l' in /home/admin/Linux/Yocto/fsl/downloads//git2/git.savannah.gnu.org.config.git
ERROR: Unpack failure for URL: 'git://git.savannah.gnu.org/config.git'. No up to date source found: clone directory not available or not up to date: /home/admin/Linux/Yocto/fsl/downloads//git2/git.savannah.gnu.org.config.git; shallow clone not enabled
DEBUG: Python function base_do_unpack finished
DEBUG: Python function do_unpack finished
What does 'ERROR: Unpack failure for URL: git://git.savannah.gnu.org/config.git. No up to date source found: clone directory not available or not up to date: /home/admin/Linux/Yocto/fsl/downloads//git2/git.savannah.gnu.org.config.git; shallow clone not enabled' mean?
What can I do to fix it? Thanks!
First, see if this is similar to this thread.
Currently, for my Yocto builds (for NXP and other boards), I share the same "downloads" DL_DIR to avoid unnecessary fetch operations.
I tried to use an empty DL_DIR... and it worked fine.
After investigating, I found out that something is corrupted in the "git2" sub-directory of DL_DIR.
I don't know what exactly.
So if you have a custom DL_DIR with a lot of stuff in it, try renaming your "git2" subdirectory to "git2.bak".
Also check whether the // seen in /home/admin/Linux/Yocto/fsl/downloads//git2/git.savannah.gnu.org.config.git comes from an empty environment variable, i.e. whether there should be an intermediate folder between downloads/ and /git2.
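A minimal sketch of that recovery, reusing the paths from the error message:
# Move the suspect git2 mirror out of the way, then rebuild the failing recipe
mv /home/admin/Linux/Yocto/fsl/downloads/git2 /home/admin/Linux/Yocto/fsl/downloads/git2.bak
bitbake -c cleanall gnu-config-native
bitbake gnu-config-native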
I've recently encountered this problem. I had an empty do_fetch function:
do_fetch(){
    :
}
Just by removing it, the git repo was cloned properly.

Bazel sh_test doesn't find node

I am trying to run a script which needs node. I have node installed on my machine.
I can run the sh_binary with bazel run //:sh_bin and the script finds node just fine:
sh_binary(
    name = "sh_bin",
    data = [],
    srcs = [":script.sh"],
)
script.sh:
node -v
bazel run //:sh_bin:
v14.17.6
Now I want to convert this to sh_test:
sh_test(
    name = "sh_bin",
    data = [],
    srcs = [":script.sh"],
)
but now bazel test //:sh_bin cannot find node:
node: command not found
I also tried adding local = True to the test, and still the same issue.
Bazel tests are run in a more controlled environment than applications run via bazel run. One of the initial conditions that the test runner establishes is the value of $PATH: https://docs.bazel.build/versions/main/test-encyclopedia.html#initial-conditions
If you are working with remote execution, another problem could be that your test is executed on a machine that does not have node installed.
It's always a great idea to strive for a hermetic build that runs and tests independently of the host's state. That means you'd need to make the node program available to your binary or test as a data dep.
A good alternative is to build on existing work such as https://github.com/bazelbuild/rules_nodejs.
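If vendoring node is overkill for a quick experiment, a non-hermetic stopgap is to hand the test an explicit PATH via the env attribute (the directories below are an assumption about where the host's node lives):
sh_test(
    name = "sh_bin",
    srcs = [":script.sh"],
    env = {"PATH": "/usr/local/bin:/usr/bin:/bin"},  # assumption: node is in one of these
)
This trades away hermeticity, so treat it as a debugging aid rather than a fix.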
That being said, your example actually works for me.
cd `mktemp -d`
touch WORKSPACE
echo "node -v" > script.sh
chmod +x script.sh
cat <<EOF > BUILD
sh_test(
    name = "sh_bin",
    srcs = [":script.sh"],
)
EOF
bazel test --test_output=all -- //:sh_bin
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:sh_bin (24 packages loaded, 282 targets configured).
INFO: Found 1 test target...
INFO: From Testing //:sh_bin:
==================== Test output for //:sh_bin:
v17.1.0
================================================================================
Target //:sh_bin up-to-date:
bazel-bin/sh_bin
INFO: Elapsed time: 6.895s, Critical Path: 0.10s
INFO: 5 processes: 3 internal, 2 linux-sandbox.
INFO: Build completed successfully, 5 total actions
//:sh_bin PASSED in 0.0s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 5 total actions

GitLab API to download archive file gives bad file in CI but good file when called from local machine

I'm trying to retrieve a build file using the GitLab API. This file was created and stored as an artifact by an upstream pipeline. Running
curl -o download --location --header 'PRIVATE-TOKEN:{MY_API_TOKEN}' https://gitlab.foo.com/api/v4/projects/{PROJECT_ID}/jobs/artifacts/{REF_BRANCH}/download?job={JOB_NAME}
on my local machine gives me a proper build file once I run unzip download. However, in the runner, the same command returns a much smaller file which I can't unzip. I've checked that the environment variables passed in the runner are right.
The job in .gitlab-ci.yml:
deploy_production_environment:
  stage: deploy_prod
  image:
    name: banst/awscli
  script:
    - apk --no-cache add curl
    - apk add unzip
    - echo $JOB_ID
    - echo $FE_BUILD_TOKEN
    - echo "https://gitlab.foo.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${CI_COMMIT_REF_NAME}/download?job=build_prod"
    - aws configure set region us-east-1
    - "curl -o download --location --header 'PRIVATE-TOKEN:${FE_BUILD_TOKEN}' https://gitlab.foo.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${CI_COMMIT_REF_NAME}/download?job=build_prod"
    - ls -l
    - unzip download
    - aws s3 cp build s3://$S3_BUCKET_PROD --recursive
The GitLab job output and the output from my local terminal (screenshots omitted) show the size difference.
Why does the API call from inside the runner consistently result in this much smaller (corrupted?) file while the same call pulls the zip file down correctly on my local machine?
The first check to do when curl brings back a "small" file is to read its content.
Often the file is not so much corrupted as it contains a text-based error message, which can give a clue as to the actual issue.
Adding -v to the curl command can also help illustrate the issue during the curl process (when executed in the context of the GitLab job).
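A quick inspection sketch along those lines (assuming file and head are available in the job image):
curl -v -o download --location --header "PRIVATE-TOKEN:${FE_BUILD_TOKEN}" "https://gitlab.foo.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${CI_COMMIT_REF_NAME}/download?job=build_prod"
file download         # 'Zip archive data' vs. 'ASCII text'
head -c 200 download  # an API error body such as {"message":"401 Unauthorized"} would show up here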
Thank you to VonC for the debugging help, recommending the -v flag to the curl command. It turns out that the single quotes around 'PRIVATE-TOKEN:${FE_BUILD_TOKEN}' prevented the variable from being expanded to its correct string value, which caused a 401 'Permission Denied' error. Removing the single quotes did the trick.
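For reference, the fixed script line would look roughly like this (outer single quotes for YAML, inner double quotes so the shell still expands the token):
- 'curl -o download --location --header "PRIVATE-TOKEN:${FE_BUILD_TOKEN}" "https://gitlab.foo.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${CI_COMMIT_REF_NAME}/download?job=build_prod"'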

Haskell installation in docker container using stack failing: too many open files

I have a simple Dockerfile:
FROM haskell:8
WORKDIR "/root"
CMD ["/bin/bash"]
which I run, mounting the current folder to "/root". In my current folder I have a Haskell project that uses stack (funblog). I configured stack.yaml to use the "lts-7.20" resolver, which aims to install ghc-8.0.1.
Inside the container, after running "stack update", I ran "stack setup", but I am getting "Too many open files in system" during the GHC compilation.
This is my stack.yaml:
flags: {}
packages:
- '.'
- location:
    git: https://github.com/agrafix/Spock.git
    commit: 2c60a48b2c0be0768071cc1b3c7f14590ffcc7d6
  subdirs:
  - Spock
  - Spock-core
  - reroute
- location:
    git: https://github.com/agrafix/Spock-digestive.git
    commit: 4c85647427e21bbaefbf04c4bc315d4bdfabba0e
extra-deps:
- digestive-bootstrap-0.1.0.1
- blaze-bootstrap-0.1.0.1
- digestive-functors-blaze-0.6.0.6
resolver: lts-7.20
One important note: I don't want to use Docker to deploy the app, just to compile it, i.e. as part of my dev process.
Any ideas?
Should I use another image without GHC pre-installed for use with Docker? Which one?
Update: Yes, I could use the GHC built into the container, and it is a good idea, but I wondered whether there is any issue building GHC within Docker.
Update 2: For anyone wishing to reproduce (on macOS, by the way), you can clone the repo https://github.com/carlosayam/funblog and grab commit 9446bc0e52574cc574a9eb5f2733f69e07b874ef.
(I will probably move on using the container's GHC.)
By default, Docker for macOS limits the number of file descriptors to avoid hitting macOS system-wide limits (the default limit is 900). To increase the limit, run the following commands:
$ cd ~/Library/Containers/com.docker.docker/Data/database/
$ git reset --hard
HEAD is now at 9410b78 last-start-time changed at 1480947038
$ cat com.docker.driver.amd64-linux/slirp/max-connections
900
$ echo 1200 > com.docker.driver.amd64-linux/slirp/max-connections
$ git add com.docker.driver.amd64-linux/slirp/max-connections
$ git commit -s -m 'Update the maximum number of connections'
[master 227a248] Update the maximum number of connections
1 file changed, 1 insertion(+), 1 deletion(-)
Then check the notice messages by:
$ syslog -k Sender Docker
<Notice>: updating connection limit to 1200
To check how many files you got open, run: sysctl kern.num_files.
To check what's your current limit, run: sysctl kern.maxfiles.
To increase it system-wide, run: sysctl -w kern.maxfiles=20480.
Source: Containers become unresponsive due to "too many connections".
See also: Docker: How to increase number of open files limit.
On Linux, you can also try running Docker with --ulimit, e.g.
docker run --ulimit nofile=5000:5000 <image-tag>
Source: Docker error: too many open files

R command not recognized when submitted via SSH

I am submitting a shell script on a remote host that in turn submits an R script, but I get the error R: command not found or Rscript: command not found (depending on whether I tried R CMD BATCH or Rscript).
I have tried submitting in the following ways:
ssh <remote-host> exec $HOME/test_script.sh
ssh <remote-host> `sh $HOME/test_script.sh`
The script test_script.sh contains (have tried Rscript as well):
#!/bin/sh
Rscript --no-save --no-restore $HOME/greetme.R
exit 0
The script greetme.R contains only cat("Hello\n").
The reason I am getting flustered is that when I log into the remote host and submit the original script with sh $HOME/test_script.sh, it runs as intended.
The system specs and R versions for both the local and remote hosts are identical:
> R.version
               _
platform       x86_64-unknown-linux-gnu
arch           x86_64
os             linux-gnu
system         x86_64, linux-gnu
status
major          3
minor          1.0
year           2014
month          04
day            10
svn rev        65387
language       R
version.string R version 3.1.0 (2014-04-10)
nickname       Spring Dance
Why is Linux refusing to recognize the commands?
I would prefer solutions using R CMD BATCH or Rscript, but if there are known workarounds using littler or %R_TERM%, I would like to hear them too.
I used this related question as a reference, as well as the documents referenced in its comments: R.exe, Rcmd.exe, Rscript.exe and Rterm.exe: what's the difference?
EDIT (solution):
As @merlin2011 suggested, once I specified the full path in test_script.sh, everything worked as intended:
#!/bin/sh
/opt/R/bin/Rscript --no-save --no-restore $HOME/greetme.R
exit 0
I got the path via the suggestion provided:
$ which Rscript
/opt/R/bin/Rscript
It appears that you have a PATH issue: R is not on your PATH when you try to run the command through ssh.
If you specify the full path to R and Rscript on the remote host, it should resolve the problem.
If you are not sure what the full path is, try logging into the server and running which R to get it.
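A minimal sketch of the same idea without editing the script, assuming the /opt/R/bin location found above (a non-interactive ssh session typically skips the login files that would normally put R on the PATH):
ssh <remote-host> 'export PATH=$PATH:/opt/R/bin; sh $HOME/test_script.sh'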
