I am generating the installer (RPM) for the Chromium browser to run on a BeagleBone Black, after cross-compiling with Ninja.
root#2b5071fb9adf:~/chromium/buildhost/src# ninja -C out/arm "chrome/installer/linux:unstable_rpm"
ninja: Entering directory `out/arm'
[15/16] ACTION //chrome/installer/linux:unstable_rpm(//build/toolchain/linux:clang_arm)
FAILED: chromium-browser-unstable-89.0.4339.0-1.armhf.rpm
python ../../build/gn_run_binary.py installer/rpm/build.sh -a armhf -c unstable -d chromium -o . -t linux -f
Expected permissions on etc to be 755, but they were 777
installer/rpm/build.sh failed with exit code 1
ninja: build stopped: subcommand failed.
This was done in a Docker container, following the steps described in https://unix.stackexchange.com/questions/527627/compile-chromium-browser-for-arm-2019
Any direction on resolving the above error while generating the RPM/DEB would be of great help.
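A note on the likely cause: Docker layers are often created with a permissive umask, so directories staged into the package come out as 777. A minimal sketch of the check-and-fix involved, simulated on a scratch directory (the idea of running chmod on the staged directory that build.sh names, and of setting umask 022 in the container before rebuilding, is my assumption about this error, not something the Chromium docs state):

```shell
# build.sh expects 755 on staged dirs such as etc/. Simulated here on a
# scratch directory; in the real build you would chmod the offending
# directory under out/arm and/or set `umask 022` before re-running ninja.
staging=$(mktemp -d)
mkdir -m 777 "$staging/etc"     # reproduces the 777 the error reports
chmod 755 "$staging/etc"        # what build.sh wants to see
stat -c '%a' "$staging/etc"     # prints 755
```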
I am new to building C libraries and have mostly worked with Python. My goal is to take the source code from https://github.com/torvalds/linux and build a custom driver from the USB/IP module (https://github.com/torvalds/linux/tree/master/drivers/usb/usbip), with some modifications.
I copied only /tool/usbip/, assuming that USB and USB/IP support are already present in Alpine.
I have set up a Docker image:
FROM alpine
COPY . .
RUN apk add build-base autoconf automake libtool eudev-dev libusb-dev
WORKDIR /tool/usbip/
RUN ./autogen.sh
RUN ./configure
RUN make install
I am getting the following error for make install:
Step 7/7 : RUN make install
---> Running in 48f53c225a99
Making install in libsrc
make[1]: Entering directory '/tool/usbip/libsrc'
CC libusbip_la-names.lo
In file included from names.c:23:
usbip_common.h:18:10: fatal error: linux/usb/ch9.h: No such file or directory
18 | #include <linux/usb/ch9.h>
| ^~~~~~~~~~~~~~~~~
compilation terminated.
make[1]: *** [Makefile:459: libusbip_la-names.lo] Error 1
make[1]: Leaving directory '/tool/usbip/libsrc'
make: *** [Makefile:500: install-recursive] Error 1
This could mean that Alpine doesn't provide the kernel's USB headers (linux/usb/ch9.h). How do I compile and install the driver in that Docker container?
Another option would be to build the entire Linux kernel from the repo, but can I instead keep using Alpine and just add a USB/IP driver to it, since Alpine is very lightweight?
I see some Kconfig and Makefile files, but I need guidance on building the required driver, as my task also requires modifying the drivers/usb/usbip code and building the usbip driver.
Some blog links or YouTube videos on building drivers would also help, but I was not able to find good resources online.
Updated Dockerfile:
FROM alpine:latest
COPY . /linux
RUN apk add build-base autoconf \
automake libtool eudev-dev \
linux-headers flex bison gmp-dev \
mpc1-dev mpfr-dev
WORKDIR /linux
RUN zcat /proc/config.gz > .config
RUN make olddefconfig
RUN make modules_prepare
RUN make M=drivers/usb/usbip modules
WORKDIR /linux/tools/usb/usbip/
RUN ./autogen.sh
RUN ./configure
RUN make install
Download the kernel sources and go to the source root.
Copy in the .config for the system you want to build for. E.g., if it is your running system and it provides /proc/config.gz, then: zcat /proc/config.gz > .config
make olddefconfig
Ensure CONFIG_USBIP and the related options are enabled as modules:
$ grep CONFIG_USBIP .config
CONFIG_USBIP_CORE=m
CONFIG_USBIP_VHCI_HCD=m
CONFIG_USBIP_VHCI_HC_PORTS=8
CONFIG_USBIP_VHCI_NR_HCS=1
CONFIG_USBIP_HOST=m
# CONFIG_USBIP_DEBUG is not set
If not, run make nconfig (or make menuconfig), navigate to Device Drivers->USB support->USB/IP support and enable it as a module (<M>); save configuration.
make modules_prepare
make M=drivers/usb/usbip modules
Your modules are in drivers/usb/usbip/
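The "enabled as modules" check above can be scripted. A minimal sketch, run here against a fabricated sample .config for demonstration (point the function at the real .config in your kernel tree):

```shell
# Verify that the key USBIP options in a .config are built as modules (=m).
check_usbip_config() {
    # fail if any of the listed options is =y or anything other than =m
    ! grep -E '^CONFIG_USBIP_(CORE|VHCI_HCD|HOST)=' "$1" | grep -qv '=m$'
}

# Fabricated sample .config, mirroring the grep output shown above:
cat > /tmp/sample.config <<'EOF'
CONFIG_USBIP_CORE=m
CONFIG_USBIP_VHCI_HCD=m
CONFIG_USBIP_VHCI_HC_PORTS=8
CONFIG_USBIP_VHCI_NR_HCS=1
CONFIG_USBIP_HOST=m
EOF

check_usbip_config /tmp/sample.config && echo "usbip configured as modules"
```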
I'm working on a Jetson Nano and trying to install PyTorch 1.4.0 onto it to run some toy experiments.
However, I'm running into a lot of trouble with this. After failing to get the prebuilt wheels working, I resorted to building from source, but after a couple of hours it fails with the following error.
[3249/3931] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Unique.cu.o
FAILED: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Unique.cu.o
cd /home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda && /usr/bin/cmake -E make_directory /home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/. && /usr/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=/home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_Unique.cu.o -D generated_cubin_file:STRING=/home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_Unique.cu.o.cubin.txt -P /home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Unique.cu.o.Release.cmake
Killed
CMake Error at torch_cuda_generated_Unique.cu.o.Release.cmake:281 (message):
Error generating file
/home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_Unique.cu.o
Does anyone know how to interpret this? Did I run out of memory/swap space?
Additionally, if anyone knows of an easier way to get pytorch>=1.1.0 onto my Nano, any tips would be appreciated :)
I followed this thread here both for the prebuilt installation and the scratch installation: https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available/72048
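Most likely yes: a bare "Killed" in the middle of an nvcc compile is usually the kernel OOM killer, and the Nano's 4 GB is easily exhausted by parallel CUDA compilation. A sketch of the usual mitigations (the one-job-per-2-GB heuristic below is my assumption, not an official PyTorch figure):

```shell
# 1) Add swap before building (run once, as root):
#      fallocate -l 4G /swapfile && chmod 600 /swapfile
#      mkswap /swapfile && swapon /swapfile
# 2) Cap build parallelism based on available RAM:
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
jobs=$(( mem_kb / (2 * 1024 * 1024) ))   # roughly one job per 2 GB
[ "$jobs" -ge 1 ] || jobs=1
echo "MAX_JOBS=$jobs"
# 3) Rebuild with the cap, e.g.:
#      MAX_JOBS=$jobs python3 setup.py bdist_wheel
```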
I am trying to avoid entering the same commands in each GDB session. For this, I followed the instructions in the Rust Discovery book, but the program does not work as described there: when I run it through cargo run, it gives the following error:
ts/project/discovery/src/06-hello-world$ cargo run
error: could not load Cargo configuration
cargo run --target thumbv7em-none-eabihf
Finished dev [unoptimized + debuginfo] target(s) in 0.04s
Running `arm-none-eabi-gdb -q -x openocd.gdb /home/jawwad-turabi/Documents/project/discovery/target/thumbv7em-none-eabihf/debug/led-roulette`
error: could not execute process `arm-none-eabi-gdb -q -x openocd.gdb /home/jawwad-turabi/Documents/project/discovery/target/thumbv7em-none-eabihf/debug/led-roulette` (never executed)
Caused by:
No such file or directory (os error 2)
My openocd.gdb file contains this content:
target remote :3333
load
break main
continue
My config file contains this content:
[target.thumbv7em-none-eabihf]
runner = "arm-none-eabi-gdb -q -x openocd.gdb"
rustflags = [
"-C", "link-arg=-Tlink.x",
]
+[build]
+target = "thumbv7em-none-eabihf"
Please change runner = "arm-none-eabi-gdb -q -x openocd.gdb" to this:
runner = "gdb-multiarch -q -x openocd.gdb"
If you are using Ubuntu 18.04 LTS or newer, this is the command the book says to use:
Ubuntu 18.04 or newer / Debian stretch or newer
NOTE gdb-multiarch is the GDB command you'll use to debug your ARM
Cortex-M programs
Ubuntu 14.04 and 16.04
NOTE arm-none-eabi-gdb is the GDB command you'll use to debug your ARM
Cortex-M programs
While flashing the STM32F3, we have to connect with the appropriate GDB binary. It may be arm-none-eabi-gdb, gdb-multiarch, or plain gdb; you may have to try all three.
As for your question, use that same binary consistently as the runner together with your openocd.gdb script. In my case, I succeeded with arm-none-eabi-gdb. Note that I am using Rust on Windows 10.
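To see which of those candidates is actually installed before editing the runner line, a quick probe like this can help (a sketch; package names vary by distro, and on Ubuntu 18.04+ the usual install is `sudo apt install gdb-multiarch`):

```shell
# Probe for an ARM-capable GDB; the first one found can go into the
# runner line of .cargo/config.
for g in gdb-multiarch arm-none-eabi-gdb gdb; do
  if command -v "$g" >/dev/null 2>&1; then
    echo "runner candidate: $g"
    break
  fi
done
```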
I'm trying to use DKMS to build a module. My problem is that I cannot seem to make DKMS pass the right ARCH to make. It keeps using my OS kernel's machine architecture, which is armv7l, but there is no directory
/usr/src/linux/arch/armv7l
It needs to look inside
/usr/src/linux/arch/arm
I have tried passing -a arm and -k 4.4.21-v7+/arm as arguments to dkms build, but it doesn't pass that down to make. Adding BUILD_EXCLUSIVE_ARCH="arm" to /usr/src/rtl8812AU-4.3.14/dkms.conf also makes no difference.
sudo dkms build -m ${DRV_NAME} -v ${DRV_VERSION} -k 4.4.21-v7+/arm
Kernel preparation unnecessary for this kernel. Skipping...
Building module:
cleaning build area....
'make'....(bad exit status: 2)
Error! Bad return status for module build on kernel: 4.4.21-v7+ (arm)
Consult /var/lib/dkms/rtl8812AU/4.3.14/build/make.log for more information.
cat /var/lib/dkms/rtl8812AU/4.3.14/build/make.log
DKMS make.log for rtl8812AU-4.3.14 for kernel 4.4.21-v7+ (arm)
Thu Sep 29 16:36:07 UTC 2016
make ARCH=armv7l CROSS_COMPILE= -C /lib/modules/4.4.21-v7+/build M=/var/lib/dkms/rtl8812AU/4.3.14/build modules
make[1]: Entering directory '/usr/src/linux'
Makefile:606: arch/armv7l/Makefile: No such file or directory
make[1]: No rule to make target 'arch/armv7l/Makefile'. Stop.
make[1]: Leaving directory '/usr/src/linux'
Makefile:1576: recipe for target 'modules' failed
make: [modules] Error 2
How do I solve this?
Thanks in advance.
I solved this problem on a Raspberry Pi 2 with Ubuntu MATE (16.04) by symlinking the arm directory inside /usr/src/linux/arch:
cd /usr/src/linux/arch && sudo ln -s arm armv7l
Dirty hack, but it works :)
You can pass the arch with -a/--arch, like this:
dkms install rtl8188fu/1.0 -j 4 -a arm
Read more on the man page by running man dkms, or find it here:
http://manpages.ubuntu.com/manpages/bionic/man8/dkms.8.html
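Background on why armv7l shows up at all: the kernel's own top-level Makefile normalizes the output of uname -m into an arch directory name with a sed mapping roughly like the sketch below, and DKMS here passed the raw machine string through as ARCH instead. (The sed rules below are a simplified illustration of the kernel's SUBARCH logic, not a verbatim copy.)

```shell
# Roughly how the kernel Makefile derives ARCH from `uname -m`:
machine=armv7l                       # what `uname -m` reports on the Pi
arch=$(printf '%s\n' "$machine" | sed -e 's/i.86/x86/' -e 's/arm.*/arm/' -e 's/aarch64.*/arm64/')
echo "$arch"                         # prints arm, i.e. /usr/src/linux/arch/arm
```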
I was installing OpenCV on my Linux Mint Qiana system using this article.
So I downloaded OpenCV-3.0.0-beta from the official OpenCV website and followed the instructions.
I keep getting the error:
CMake Error: The source directory "/home/himanshi" does not appear to
contain CMakeLists.txt. Specify --help for usage, or press the help
button on the CMake
On typing this:
cmake -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_IPP=OFF -D CMAKE_INSTALL_PREFIX=/usr ..
What do I do now?
The .. at the end of your call tells CMake where to look for the source code, and in particular for the CMakeLists.txt that contains the configure instructions. Go to the directory where you put the OpenCV files, create a build directory there, step into it, and repeat the command. Then .. matches your source directory.
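The layout the answer describes can be sketched like this, simulated here with a scratch tree (substitute the directory where you actually unpacked OpenCV for the mktemp path):

```shell
# `..` must resolve to the directory that holds CMakeLists.txt.
src=$(mktemp -d)/opencv-3.0.0-beta    # stands in for your unpacked OpenCV tree
mkdir -p "$src/build"
touch "$src/CMakeLists.txt"           # the real tree already contains this file
cd "$src/build"
[ -f ../CMakeLists.txt ] && echo "ok: cmake .. will find the sources"
# the real call from inside build/ would be:
#   cmake -D CMAKE_INSTALL_PREFIX=/usr <other -D flags> ..
```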