I want to make my own Haar classifier for hand detection, so I was following the tutorial given by Naotoshi Seo. In this tutorial various Linux commands are used, and I don't have Linux.
Some of the commands are:
$ find [image dir] -name '*.[image ext]' > [description file]
$ createsamples -info samples.dat -vec samples.vec -w 20 -h 20
So how can I use these commands on my Windows machine?
The "find" command creates a text file with the names of your image files. You can do that by hand if you want.
The second one is not a standard Linux command, but, I assume, one of the tools you need to create the classifier. If you want to use Windows you will have to download the Windows version of the tools in that tutorial. He uses Cygwin so you will need to install that as well.
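For example, once Cygwin is installed, the same find command from the tutorial works in its shell; the folder name and image extension below are only placeholders for your own dataset:
$ cd /cygdrive/c/hand-dataset
$ find positives -name '*.png' > positives.dat
From a plain Command Prompt, dir /b /s can produce a similar list, which you can then clean up by hand.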
Another option is to download the Ubuntu Live distribution and install it on a USB memory stick. You can then boot from the USB stick, create your classifier in Linux, then go back to Windows when you're done.
The situation in short
I can't launch an executable (binary or a script) in a WSL2 distro if it wasn't created inside this distro
I can launch scripts and binaries that were created inside the distro shell (not using /mnt/c or /mnt/d in any way)
But I can't launch anything that was created outside and copied inside from Windows (using /mnt/c or /mnt/d)
I can see the copied files in the file system, can "cat" them, can look them up with "which", but I cannot launch them by entering the path into the command line
The questions I have regarding all this
How come the shell can't see the files while utilities you run from the shell can?
How do I make the shell see files that were copied from outside?
If I can't make the shell launch the files, then how do I launch them?
The Situation in detail
I have Windows 10 with WSL2 and two distros
Ubuntu-20.04
Alpine
In Ubuntu I have a "Hello, World!" project written in C
It compiles and runs in Ubuntu just fine
But, when I copy it from Ubuntu to Windows
cp hello /mnt/d/
and then go to Alpine and copy it inside from Windows
cp /mnt/d/hello .
I then have trouble launching it inside Alpine
Here is the output of the file hello command in Ubuntu, with some extra formatting (just in case)
$ file hello
hello:
ELF 64-bit LSB shared object,
x86-64,
version 1 (SYSV),
dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2,
BuildID[sha1]=021352ab7bf244e340c3c42ce34225b74baa6618,
for GNU/Linux 3.2.0,
not stripped
Here's what I have in Alpine
$ cp /mnt/d/hello .
$ ls -l
-rwxr-xr-x 1 pavel pavel 16760 Apr 19 19:07 hello
$ ./hello
-ash: ./hello: not found
Now same with a script copied from Windows
Copy the script inside Alpine from Windows
$ cp /mnt/d/hello.sh .
Checking the contents
$ cat hello.sh
#!/bin/ash
echo Hello!
Setting the execute permission just in case
$ chmod agu+x hello.sh
Trying to run it
$ ./hello.sh
-ash: ./hello.sh: not found
But, I can launch the hello.sh by explicitly calling the ash tool and passing the script path as the argument
$ ash ./hello.sh
Hello!
At the same time, a script created inside Alpine runs just by entering its path on the command line
$ cat << EOF > hello-local.sh
> #!/bin/ash
> echo Local hello!
> EOF
$ chmod agu+x hello-local.sh
$ ./hello-local.sh
Local hello!
Also, I couldn't turn a file that wouldn't run into one that would, either by copying it with cp
cp hello.sh hello2.sh
or by copying it with cat
cat hello.sh > hello3.sh
chmod agu+x hello3.sh
Why do I need to copy things from outside
It all started when I wanted to explore how Docker for Windows uses Linux namespaces to separate containers
The distro that Docker for Windows uses is called docker-desktop
The docker-desktop distro neither has utilities that I need for my experiments, nor a package manager to get those utilities
So I tried to copy them from outside
But now the Docker for Windows studies are not the only concern
I want to understand this magic that is happening just as badly
To be fair, there really are three separate questions here, but not necessarily the questions you listed in your post:
Secondary question -- Why does your script that you copied to Alpine fail?
As @MarkPlotnick covered in the comments (and you confirmed), it was due to the script having DOS/Windows line endings (CRLF). In general, try to avoid creating or editing Linux text files using Windows tools unless you are sure that they are using Linux line endings.
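A quick way to confirm and fix this inside Alpine, using only BusyBox tools (dos2unix or sed -i 's/\r$//' are alternatives if you prefer; the file name is the one from the question):
$ tr -d '\r' < hello.sh > hello-fixed.sh
$ chmod +x hello-fixed.sh
$ ./hello-fixed.sh
Hello!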
Secondary question -- Why does your C program fail when you compile on Ubuntu and copy the binary to Alpine?
Also as @MarkPlotnick mentioned in the comments, this is because Ubuntu uses glibc as the standard library implementation by default, but Alpine uses musl. See a number of questions here for more information. The first one in the list sorted by "relevance" is actually a pretty good one to start with.
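For a toy program like this one, a quick way to sidestep the libc mismatch (rather than fix it properly) is to link statically on the Ubuntu side; this is only a sketch of that workaround, not the recommended setup:
$ gcc -static -o hello hello.c
$ file hello    # should now report "statically linked" instead of listing the glibc interpreter
A statically linked binary copied to Alpine no longer needs /lib64/ld-linux-x86-64.so.2 or glibc, so ./hello runs there.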
Main question -- How to explore the docker-desktop distro
Really, your main goal seems to be how to gain access to certain tools inside the docker-desktop distro in order to learn more about it.
I was going to say, "don't" (with more explanation), but the reality is that I think it's a potentially good learning experience. I've done it, to some degree, so who am I to say it's "too dangerous" or recommend against it? ;-)
I will give fair warning, though -- The docker-desktop distro isn't intended to be run by users. Docker Desktop "injects" links and sockets into your other WSL2 distros (which you can enable/disable per-distro in Docker Desktop) so that its tools, processes, etc., are available to all your WSL2 (and PowerShell/CMD) instances.
I'd personally try to avoid making any changes to the docker-desktop distro itself. They'll likely be overwritten anyway by Docker Desktop when it extracts a new rootfs.
However, we can still gain access to the tools we need by accessing them from another distribution, but without copying them into docker-desktop.
First, a note -- As I think you have probably already figured out, docker-desktop is also musl-based. So you'll want to use tools from another musl-based distro like Alpine.
This can be easily accomplished by running the following line once in your Alpine instance (as root):
echo "/ /mnt/wsl/instances/Alpine none defaults,bind,X-mount.mkdir 0 0" >> /etc/fstab
That will bind-mount the Alpine instance's root filesystem into the tmpfs /mnt/wsl mount. You can see my Super User answer here for more details on that.
Once you wsl --terminate Alpine and restart it, you'll have access to the Alpine files from any other WSL2 distribution.
As a useful (for your intent) example, install the util-linux package in Alpine to get access to the lsns command.
Then, in the docker-desktop distro (which I assume you already know to access with wsl -u root -d docker-desktop, but I'll include that command here for other future readers), to list the namespaces:
/mnt/host/wsl/instances/Alpine/usr/bin/lsns
The docker-desktop instance automounts at a slightly different directory than default (see cat /etc/wsl.conf), so you need to adjust the path to /mnt/host/wsl instead of /mnt/wsl.
But with that in place, you can run all (most?) of your Alpine binaries directly in docker-desktop without having to modify it directly. If you have a script in your home directory that you want to run in docker-desktop, for instance:
/mnt/host/wsl/instances/Alpine/home/<yourusername>/hello.sh
Note that if you have a binary that requires a dynamically-linked library on Alpine, I'm assuming you'll need to adjust your LD_LIBRARY_PATH accordingly, although I haven't tested. For instance:
LD_LIBRARY_PATH=/mnt/host/wsl/instances/Alpine/usr/lib /mnt/host/wsl/instances/Alpine/usr/bin/<whatever>
Hello, I'm trying to figure out how to install Alpine Linux, in diskless mode, on my remote dedicated server, without VNC access.
The hosting provider just offers a few images and a rescue system, without a VNC option.
I already tried to boot the ISO via GRUB image boot, but the Alpine Linux installation image doesn't have OpenSSH installed, so I couldn't connect to the server to run alpine-setup.
So I thought I could maybe edit the squashfs image.
With a Debian live CD it's easy to unsquash the image, enable "PermitRootLogin yes" and squash it again, but with Alpine Linux I have absolutely no clue.
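For comparison, the Debian-style edit referred to above goes roughly like this (paths are illustrative; the Alpine ISO is assembled differently, which is exactly the problem):
$ unsquashfs filesystem.squashfs                 # unpacks into ./squashfs-root
$ sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' squashfs-root/etc/ssh/sshd_config
$ mksquashfs squashfs-root filesystem.squashfs -noappend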
After this I tried to build a custom Alpine ISO with mkimage, but I just don't know how to build it properly; I get "unable to load key file" and "$apks: unable to select package (or its dependencies)" errors when building.
(https://wiki.alpinelinux.org/wiki/How_to_make_a_custom_ISO_image_with_mkimage)
I used this code for the mkimage profile:
profile_nas() {
    profile_standard
    kernel_cmdline="unionfs_size=512M console=tty0 console=ttyS0,115200"
    syslinux_serial="0 115200"
    kernel_addons="zfs"
    apks="\$apks openssh"
    local _k _a
    for _k in \$kernel_flavors; do
        apks="\$apks linux-\$_k"
        for _a in \$kernel_addons; do
            apks="\$apks \$_a-\$_k"
        done
    done
    apks="\$apks linux-firmware"
}
and this one to build
sh mkimage.sh --tag edge \
--outdir ~/iso \
--arch x86_64 \
--repository https://dl-cdn.alpinelinux.org/alpine/edge/main/ \
--profile nas
Even if I'm able to generate the custom Alpine Linux ISO, I don't understand this part of the guide (and even if I understood it, I still wouldn't know how to enable remote root access, i.e. "PermitRootLogin yes", in sshd_config):
Making packages available on boot
A package may be made available in the live system by defining the generation of an apkovl which contains a corresponding /etc/apk/world file, and adding that overlay definition to the mkimg-profile, e.g. with `apkovl="genapkovl-mkimgoverlay.sh"`
The definition may be done as in the genapkovl-dhcp.sh example. Copy the relevant parts (including the rc_add lines) into a `genapkovl-mkimgoverlay.sh` file and add the package(s) that should be installed in the live system on separate lines in the file contents for /etc/apk/world.
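To illustrate what that means in practice, here is a rough, untested sketch of a genapkovl-mkimgoverlay.sh modeled on the genapkovl-dhcp.sh example; the file names, the /etc/apk/world contents and the sshd_config line are my assumptions for the OpenSSH/root-login case, not something from the wiki verbatim:
#!/bin/sh -e
# sketch modeled on genapkovl-dhcp.sh (untested)
HOSTNAME="$1"
[ -n "$HOSTNAME" ] || { echo "usage: $0 hostname"; exit 1; }

tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT

rc_add() {
    # enable an OpenRC service in the given runlevel
    mkdir -p "$tmp"/etc/runlevels/"$2"
    ln -sf /etc/init.d/"$1" "$tmp"/etc/runlevels/"$2"/"$1"
}

mkdir -p "$tmp"/etc/apk "$tmp"/etc/ssh

# packages to be installed in the live system on boot
cat > "$tmp"/etc/apk/world <<EOF
alpine-base
openssh
EOF

# allow remote root login; key or password setup still has to happen elsewhere
cat > "$tmp"/etc/ssh/sshd_config <<EOF
PermitRootLogin yes
EOF

rc_add sshd default

tar -c -C "$tmp" etc | gzip -9n > "$HOSTNAME".apkovl.tar.gz
The mkimg profile would then reference it with apkovl="genapkovl-mkimgoverlay.sh", as the guide says.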
After this I tried to set up SSH in the initramfs with dropbear-initramfs.
But that doesn't work either. With encrypted filesystems it has always worked, but for this task I can't get a connection.
Does someone have a different idea of how I can accomplish this?
Debugging the Linux kernel with kgdb over RS-232 needs several preparation steps. I found awesome documentation, but no single source that is fully self-contained, summarizes all the steps needed, doesn't go on for ages, has actually been tested, and also covers Yocto.
Is there any source that covers everything needed in one single, short description?
I.e.:
What files are needed in the directory GDB is started from (e.g. kernel-awareness scripts, source, vmlinux), how to get these, and where to put them?
When and where to get a cross-gdb?
ALL kernel config options needed, including the non-obvious ones (like CONFIG_RANDOMIZE_BASE)
How to configure the serial ports
Explaining a working back-and-forth between breaking into the debuggee and the debugger to get started.
Explaining one rock-solid way of stopping the kernel that works everywhere.
Explaining how to get this done not only for PC-PC debugging, but also for Yocto targets.
Debugging the Linux kernel via a null-modem cable:
It took me a while to get a kgdb connection with Linux kernel awareness fully running. I share my way of doing this with Ubuntu Eoan (optionally: Yocto Warrior) in 2020 here:
Tested with:
Debugging a Linux-based Intel PC from an Intel MacBook running macOS Catalina, using the gdb from the Homebrew package "i386-elf-gdb" (without the "-tui" option in GDB).
Debugging a Linux-based ARM target (i.MX6, Yocto) from a Linux-based Intel PC.
Prerequisites:
You need two computers and a serial null-modem cable. Check the cable by firing up a serial terminal (e.g. screen or putty) on both hosts, connecting to your serial port (e.g. /dev/ttyS0 or /dev/ttyUSB0), and printing characters from each station to the other. Remember the /dev/tty ports you confirmed.
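For example, with screen the check can be as simple as running the following on each machine (the port names are placeholders; just make sure both sides use the same baud rate, then type on one side and watch the other):
screen /dev/ttyS0 115200      # first machine
screen /dev/ttyUSB0 115200    # second machine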
Preparation:
On the first computer, the debuggee (we'll call it the "target"), you need:
A special kernel installed that contains symbols, kgdb support, etc.
Learn how to compile and install a kernel and use the configuration below in make menuconfig. You can search for symbols with F8 or the / key in menuconfig.
(E.g. wiki.ubuntu.com. There, take care in the first paragraph to sort out deb-src before apt-get :)
# CONFIG_SERIAL_KGDB_NMI is not set
CONFIG_CONSOLE_POLL=y
# CONFIG_DEBUG_INFO is not set
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
# CONFIG_KGDB_KDB is not set
CONFIG_FRAME_POINTER=y
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_INFO_REDUCED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_RANDOMIZE_BASE is not set
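For a plain PC target, one way to rebuild and install a kernel with this configuration is the vanilla-tree route sketched below; the Ubuntu wiki page mentioned above describes the distribution-specific packaging way, so treat this only as an outline (paths and version are placeholders):
cd linux-5.x/                      # your kernel source tree
make olddefconfig
make menuconfig                    # set the options listed above
make -j"$(nproc)" bindeb-pkg       # builds .deb packages on Debian/Ubuntu
sudo dpkg -i ../linux-image-*.deb ../linux-headers-*.deb
sudo reboot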
(Note for advanced Yocto use, skip if you're debugging a PC:
In Yocto I created a file in my layer, recipes-kernel/linux/linux-mainline_%.bbappend, with the content:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://kgdb.cfg“
And in files/kgdb.cfg I added the config fragment shown above (without CONFIG_RANDOMIZE_BASE and CONFIG_FRAME_POINTER, which are not available on ARM).
)
On the second computer, the debugger (we'll call it the "debugger PC"), you need:
The full kernel source code, the same code you used to compile the kernel above. (If you compiled the .o and .ko objects in place and not in a separate build folder, better not copy that directory from the other PC where you ran make; grab fresh sources instead.)
The vmlinux file containing the symbols (it lies at the top level of the kernel source root, or of the build folder, after the kernel make).
The vmlinux-gdb.py file that was made during the kernel build (it also lies at the same top-level position).
All scripts in the folder scripts/gdb (the scripts folder is at the same top-level position; if you use a dedicated build folder, use the scripts folder from there, not from the source folder).
(Advanced: if the two computers don't have the same CPU architecture, like Intel and ARM, a cross-gdb build. Ignore this if you're on Intel/AMD.)
Note for advanced Yocto use: I did something like this (ignore if you're debugging a PC):
bitbake -c patch virtual/kernel #(apply the changed kernel config from above)
bitbake -f -c compile virtual/kernel #(unpack is not sufficient because of vmlinux-gdb.py)
mkdir ~/gdbenv
cp -a tmp/work-shared/phyboard-mira-imx6-14/kernel-source/. ~/gdbenv
cp tmp/work/phyboard_mira_imx6_14-phytec-linux-gnueabi/linux-mainline/4.19.100-phy1-r0.0/build/vmlinux ~/gdbenv
cp tmp/work/phyboard_mira_imx6_14-phytec-linux-gnueabi/linux-mainline/4.19.100-phy1-r0.0/build/vmlinux-gdb.py ~/gdbenv
mkdir ~/gdbenv/scripts
cp -r tmp/work/phyboard_mira_imx6_14-phytec-linux-gnueabi/linux-mainline/4.19.100-phy1-r0.0/build/scripts/gdb ~/gdbenv/scripts
Then (ignore if you're on a PC)
bitbake -c populate_sdk [my-image]
Then (still ignore on a PC) install the SDK .sh installer from your deploy directory on the debugger PC and set up the environment as guided by the output of the install script (remember that command), then use "$GDB" to start the cross-gdb instead of "gdb".
Debug execution
Launch two console screens on the debugger PC:
Console 1, ssh: +++++++++++++++++++++++++++++++++++++++
ssh user@192.168.x.y
sudo -s
echo ttyS0,9600n8 > /sys/module/kgdboc/parameters/kgdboc
echo 1 > /proc/sys/kernel/sysrq
echo g > /proc/sysrq-trigger
Console 2, local: ++++++++++++++++++++++++++++++++++++++++
cd ~/gdbenv
gdb -tui ./vmlinux
add-auto-load-safe-path ~/gdbenv
source ~/gdbenv/vmlinux-gdb.py
set serial baud 9600
target remote /dev/ttyS0 (use the tty port you confirmed in the beginning)
b [name of the C function you want to debug]
cont
Back to console 1, ssh: +++++++++++++++++++++++++++++++++++++++
[Now trigger the function, e.g. sudo modprobe yourFancyKernelModule]
Back to console 2, local: ++++++++++++++++++++++++++++++++++++++++
Now use gdb functions, like bt, step, next, finish ...
You can also use Linux-aware commands. Call "apropos lx" in gdb for a list of commands.
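A few of the commands that list typically contains, assuming the scripts from scripts/gdb loaded correctly:
lx-symbols      # (re)load kernel module symbols, e.g. after a modprobe on the target
lx-dmesg        # print the kernel log buffer
lx-ps           # list tasks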
I am trying to build the smallest possible linux image using the Yocto project. I would also like to have package management on the target to be able to add to and update parts of the running system.
I can enable the package management by adding this to my conf/local.conf:
EXTRA_IMAGE_FEATURES = "package-management"
Using rpm, that pulls in the smartpm package manager, which is based on Python, which in turn makes the image too large. So I tried to use ipk packages, but that still depends on Python.
Does anyone have a good idea how to include package management in Yocto with the least possible overhead?
I can suggest a few things which may help you to optimize the size of the rootfs:
Optimize the Linux kernel binary as much as possible and remove unnecessary features (filesystems, device drivers, networking, etc.).
$ bitbake -c menuconfig virtual/kernel //configure as per your requirement
$ bitbake -c savedefconfig virtual/kernel //savedefconfig
$ bitbake -f virtual/kernel
Configure BusyBox and remove unused things:
$ bitbake -c menuconfig busybox
Remove distro features that are not in use (and check for more): graphics [x11], sound [alsa], touchscreen [touchscreen], multimedia. Apply the change in the conf/local.conf file. Example: DISTRO_FEATURES_remove = "x11 alsa touchscreen bluetooth opengl wayland"
Choose a proper system init manager: systemd or sysvinit.
Remove unused packages from the image. Example: PACKAGE_EXCLUDE = "perl5 sqlite3 udev-hwdb bluez3 bluez4"
For a small embedded device, PACKAGE_CLASSES = "package_ipk" is preferred, as it is well optimized for small systems; a combined local.conf sketch follows below.
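Pulling those suggestions together, the relevant part of conf/local.conf could look roughly like this (a sketch in the old override syntax; the exact package, feature and variable names depend on your image and Yocto release):
# conf/local.conf (sketch)
PACKAGE_CLASSES = "package_ipk"
EXTRA_IMAGE_FEATURES = "package-management"
DISTRO_FEATURES_remove = "x11 alsa touchscreen bluetooth opengl wayland"
PACKAGE_EXCLUDE = "perl5 sqlite3 udev-hwdb"
VIRTUAL-RUNTIME_init_manager = "sysvinit"    # or systemd, per the init manager point above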
Looks like this is the best I can do.
PACKAGE_CLASSES = "package_ipk"
Then edit the recipe for opkg-utils so it doesn't depend on Python. That will of course break the Python-based utilities, though.
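If you go that route, a non-invasive way is a bbappend in your own layer; this is only a guess at the mechanism (it assumes the Python dependency enters through RDEPENDS and uses the old override syntax), so check the actual opkg-utils recipe first:
# recipes-devtools/opkg-utils/opkg-utils_%.bbappend (sketch)
RDEPENDS_${PN}_remove = "python"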
I plan to self-study 6.001 with the video lectures and lecture handouts. However, I have some problems setting up MIT Scheme in Ubuntu (Intrepid).
I used the package manager and installed MIT Scheme, but it's obviously the wrong version to use. It should be 7.5.1 instead of 7.7.90.
I followed the instructions from this website (http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-001Spring-2005/Tools/detail/linuxinstall.htm)
So far, I've downloaded the tar file and extracted it to /usr/local. I have no idea what step 3 means.
Then I entered the command
scheme -large -band 6001.com -edit
and the error is
Not enough memory for this configuration.
I tried to run it under sudo, and this time the error is different
Unable to allocate process table.
Inconsistency detected
I have close to 1GB of free memory, with ample HDD space. What should I do to successfully set this up?
Step 3 means that you should type export MITSCHEME_6001_DIRECTORY=${your_problems_path}. If you don't want to type it every time you launch Scheme, put that line in your ~/.bash_profile file (in case you use bash).
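For example, if the 6.001 support files ended up in a directory called /usr/local/6001 (a made-up name; adjust it to wherever your tarball actually unpacked), the line would be:
export MITSCHEME_6001_DIRECTORY=/usr/local/6001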
About the problem itself, Google instantly suggests a solution:
sudo sysctl -w vm.mmap_min_addr=0 (taken from http://ubuntuforums.org/showthread.php?p=4868292)
Instead of using the package manager, you may also want to compile the portable C sources for Unix. I am using that happily.