Yocto: bitbake command to regenerate all RPM files - linux

I wanted to free up some space and deleted all the directories in build/tmp/deploy/rpm/, thinking Yocto would detect it and recreate them on the next bitbake call... that was a mistake! :(
Here's the bitbake error just in case:
bitbake <image_name>
[...]
ERROR: ... do_rootfs: minicom not found in the base feeds (<image_name> corei7-64-intel-common corei7-64 core2-64 x86_64 noarch any all).
[...list of every package...]
Is there any way to force the regeneration of every RPM using bitbake?
Forcing the regeneration with bitbake -f -c package_write_rpm <package> works, but I couldn't find a command to do it all at once.
I tried cleaning the state of the native rpm packages, thinking it might invalidate the state of the RPM files, but no luck:
bitbake -f -c cleanall nativesdk-rpm nativesdk-rpmresolve rpmresolve-native rpm-native
bitbake <image_name>
I also thought this would work, but it didn't:
bitbake -f -c package_write_rpm <image_name>
I will try to hack something together with bitbake-layers show-recipes and xargs, but it would be cool to have a proper bitbake command.
I am using Yocto 2.1 (Krogoth).
Thanks!

I ended up writing the following script, using the bitbake dependency tree to get the list of packages (thanks to this yocto/bitbake reference page):
# bitbake -g <image> && cat pn-depends.dot | grep -v -e '-native' | grep -v digraph | grep -v -e '-image' | awk '{print $1}' | sort | uniq | grep -v "}" | grep -v cross | grep -v gcc | grep -v glibc > packages-list.txt
# cat packages-list.txt | xargs bitbake -f -c package_write_rpm
Maybe there is a more straightforward solution? For now this worked.
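For reference, newer bitbake releases have a --runall option that runs a given task for a target and everything it depends on. I don't believe it is available in Krogoth, so treat this as an assumption to verify against bitbake --help:
# Hedged sketch: --runall exists in newer bitbake releases, likely not in Krogoth
bitbake --runall=package_write_rpm <image_name>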

Related

Linux curl: no url found (or) curl: malformed url

So I am setting up Docker on my Linux VM and have to run this command as part of the steps. Even though it mentions a URL, and I changed -o to -O once, I am still getting those errors. What should I do?
This is the command I'm running:
sudo curl -L $(curl -L https://api.github.com/repos/docker/compose/releases/latest | grep "browser_download_url" | grep "$(uname -s)-$(uname -m)\"" | sed -nr 's/\s+"browser_download_url":\s+"(https.*)"/\1/p') -o /usr/local/bin/docker-compose
The grep that filters for the system you are running matches 'Linux' with an upper-case L (from uname -s), while the release asset names use lower-case 'linux'; this may be the cause of your errors. Try making the match case-insensitive:
sudo curl -L $(curl -L https://api.github.com/repos/docker/compose/releases/latest | grep "browser_download_url" | grep -i "$(uname -s)-$(uname -m)\"" | sed -nr 's/\s+"browser_download_url":\s+"(https.*)"/\1/p') -o /usr/local/bin/docker-compose
Hope this helps!
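If jq is available, the same thing can be done without the sed by parsing the GitHub releases API response directly. A sketch; the trailing $ anchor assumes the matching asset name ends with the OS-architecture pair:
# Hedged sketch: assumes jq is installed; .assets[].browser_download_url is part of GitHub's releases API
sudo curl -L "$(curl -sL https://api.github.com/repos/docker/compose/releases/latest | jq -r '.assets[].browser_download_url' | grep -i "$(uname -s)-$(uname -m)$")" -o /usr/local/bin/docker-compose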

Linux ssh output of two commands in an input

I am looking for a command that will let me find an RPM installed on my machine and remove it. The only information I have about the RPM to delete is a suffix of its name. This is what I want to do, in one command:
rpm -qa | grep -i $rpmnameSuffix >> output
rpm -e output
Use command substitution to insert the output of one command into another command line.
rpm -e "$(rpm -qa | grep -i $rpmnameSuffix)"

How to find all shared libraries actually used during execution in Linux?

I have an executable and I would like to find out which shared libraries were actually used during a specific run. I know ldd would list all the shared library dependencies of that executable but I would like to find out the subset of those that were actually used during a specific run*. Is this possible?
*what I mean with specific run is running the executable with certain input parameters that would cause only a small part of the code to be run.
You can use ltrace(1) for this:
$ PROG='ls -l'
# Collect call info
$ ltrace -o calls.txt -l '*' $PROG &> /dev/null
# Analyze collected data
$ cat calls.txt | sed -ne '/->/{ s/^\(.*\)->.*/\1/; p }' | sort -u
libacl.so.1
libcap.so.2
libc.so.6
libselinux.so.1
ls
# Compare with ldd
$ ldd /bin/ls | wc -l
10
You could use strace and grep for opened .so files. Note that strace writes its trace to stderr:
strace $MYPROG 2>&1 | grep -E 'open.*\.so'
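To cut down the noise you could restrict the trace to the open syscalls. A sketch; openat has largely replaced open on modern kernels, and the program's own stdout is discarded:
# Hedged sketch: trace only open/openat; strace writes to stderr, which goes to the pipe
strace -e trace=open,openat $MYPROG 2>&1 >/dev/null | grep '\.so'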
lsof should also work; grep its output for open libraries.
lsof -p $PID | awk '{print $9}' | grep '\.so'
This assumes the shared libraries have a .so extension.
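On glibc-based systems the dynamic linker itself can report what it loads. A sketch; this is glibc-specific, and the exact output format of LD_DEBUG is not guaranteed to be stable:
# Hedged sketch: LD_DEBUG=libs makes the glibc loader log each library lookup to stderr
LD_DEBUG=libs $MYPROG 2>&1 >/dev/null | grep 'find library'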

Most efficient way to get the latest version of an rpm via web

This is my attempt using wget to pull down the web page, dig out the latest tar file and rerun wget to download it. In the example, I'm fetching pip.
wget https://pypi.org/project/pip/#files
wget $(grep tar.gz index.html | head -1 | awk -F= '{print $2}' | sed 's/>//' | sed 's/\"//g')
gunzip -c $(ls | grep tar | tail -1) | tar xvf -
yum install -y python-setuptools
cd $(ls -d */ | grep pip)
python setup.py install
cd ..
I'm sure there is a better way, perhaps using only one wget or similar.
Do you mean like this?
wget $(curl -s "https://pypi.org/project/pip/#files" | grep -o 'https://[^"]*tar\.gz')
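A more robust route than scraping the HTML is PyPI's JSON API, which lists each file of the latest release with its URL. A sketch, assuming jq is installed:
# Hedged sketch: the /pypi/<project>/json endpoint returns the latest release's files in .urls
wget "$(curl -s https://pypi.org/pypi/pip/json | jq -r '.urls[] | select(.filename | endswith(".tar.gz")) | .url')"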

How do you use wget to download most up to date file on a site?

Hello, I am trying to use wget to download the most up-to-date McAfee patch, and I am having issues singling out the .tar file. This is what I have:
wget -q -O - ftp://ftp.mcafee.com/pub/antivirus/datfiles/4.x/ | grep -o -m 2 "avvdat-[^\']*"
However when I run the above command it gives me:
avvdat-8065.tar">avvdat-8065.tar</a> (95191040 bytes)
avvdat-8066.tar">avvdat-8066.tar</a> (95385600 bytes)
But I need it to just be the most recent .tar file between the <a> </a>, which in this case would be avvdat-8066.tar. Can someone please help me out with grepping the correct .tar? I am not too good with regex or sed.
Try this:
wget $(wget -q -O - ftp://ftp.mcafee.com/pub/antivirus/datfiles/4.x/ | grep -Eo "ftp://[^\"\]+" | sort | tail -n1)
I'd suggest modifying your grep regex so it retrieves only the file name, then using sort to sort the results and tail to discard all but the last one.
wget -q -O - ftp://ftp.mcafee.com/pub/antivirus/datfiles/4.x/ | grep -o -m 2 "avvdat-[^\'\"]*" | sort | tail -1
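One thing to watch: plain sort is lexicographic, so avvdat-10000.tar would sort before avvdat-9999.tar once the DAT number gains a digit. GNU sort's version sort avoids that. A sketch; the tightened grep pattern is my assumption about the file naming:
# Hedged sketch: match only the bare file name, then version-sort it (GNU coreutils)
wget -q -O - ftp://ftp.mcafee.com/pub/antivirus/datfiles/4.x/ | grep -o "avvdat-[0-9]*\.tar" | sort -uV | tail -1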
