WSL2 distro shell can't launch a file copied from outside

The situation in short
I can't launch an executable (a binary or a script) in a WSL2 distro if it wasn't created inside that distro
I can launch scripts and binaries that were created inside the distro shell (not using /mnt/c or /mnt/d in any way)
But I can't launch anything that was created outside and copied in from Windows (using /mnt/c or /mnt/d)
I can see the copied files in the file system, can "cat" them, can look them up with "which", but I cannot launch them by entering the path on the command line
The questions I have regarding all this
How come the shell can't see the files while utilities you run from the shell can?
How do I make the shell see files that were copied from outside?
If I can't make the shell launch the files, then how do I launch them?
The Situation in detail
I have Windows 10 with WSL2 and two distros
Ubuntu-20.04
Alpine
In Ubuntu I have a "Hello, World!" project written in C
It compiles and runs in Ubuntu just fine
But, when I copy it from Ubuntu to Windows
cp hello /mnt/d/
and then go to Alpine and copy it inside from Windows
cp /mnt/d/hello .
I then have trouble launching it inside Alpine
Here is the output of the file hello command in Ubuntu, with some extra formatting (just in case)
$ file hello
hello:
ELF 64-bit LSB shared object,
x86-64,
version 1 (SYSV),
dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2,
BuildID[sha1]=021352ab7bf244e340c3c42ce34225b74baa6618,
for GNU/Linux 3.2.0,
not stripped
Here's what I have in Alpine
$ cp /mnt/d/hello .
$ ls -l
-rwxr-xr-x 1 pavel pavel 16760 Apr 19 19:07 hello
$ ./hello
-ash: ./hello: not found
Now same with a script copied from Windows
Copy the script inside Alpine from Windows
$ cp /mnt/d/hello.sh .
Checking the contents
$ cat hello.sh
#!/bin/ash
echo Hello!
Setting the execute permission just in case
$ chmod agu+x hello.sh
Trying to run it
$ ./hello.sh
-ash: ./hello.sh: not found
But, I can launch the hello.sh by explicitly calling the ash tool and passing the script path as the argument
$ ash ./hello.sh
Hello!
At the same time, a script created inside Alpine runs just by entering its path on the command line
$ cat << EOF > hello-local.sh
> #!/bin/ash
> echo Local hello!
> EOF
$ chmod agu+x hello-local.sh
$ ./hello-local.sh
Local hello!
Also, I couldn't make a file that would run out of one that wouldn't, either by copying it with cp
cp hello.sh hello2.sh
or by copying it with cat
cat hello.sh > hello3.sh
chmod agu+x hello3.sh
Why do I need to copy things from outside
It all started when I wanted to explore how Docker for Windows uses Linux namespaces to separate containers
The distro that Docker for Windows uses is called docker-desktop
The docker-desktop distro neither has utilities that I need for my experiments, nor a package manager to get those utilities
So I tried to copy them from outside
But now the Docker for Windows studies are not the only concern
I want to understand this magic that is happening just as badly

To be fair, there really are three separate questions here, but not necessarily the questions you listed in your post:
Secondary question -- Why does your script that you copied to Alpine fail?
As @MarkPlotnick covered in the comments (and you confirmed), it was due to the script having DOS/Windows line endings (CRLF). In general, try to avoid creating or editing Linux text files using Windows tools unless you are sure that they are using Linux line endings.
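For instance, a minimal check-and-fix from inside Alpine (a sketch, assuming the file and sed utilities are available; dos2unix would work too if installed):
$ file hello.sh               # reports "... with CRLF line terminators" when the endings are the problem
$ sed -i 's/\r$//' hello.sh   # strip the carriage returns in place
$ ./hello.sh
Hello!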
Secondary question -- Why does your C program fail when you compile on Ubuntu and copy the binary to Alpine?
Also as @MarkPlotnick mentioned in the comments, this is because Ubuntu uses glibc as the standard library implementation by default, but Alpine uses musl. See a number of questions here for more information. The first one in the list sorted by "relevance" is actually a pretty good one to start with.
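If you just want the Ubuntu-built binary to run on Alpine, one common workaround (a sketch, not a general recommendation) is to link it statically so it doesn't need glibc at run time:
$ gcc -static -o hello hello.c   # file hello should now report "statically linked"
$ cp hello /mnt/d/               # copy via Windows as before; it should now run on Alpine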
Main question -- How to explore the docker-desktop distro
Really, your main goal seems to be how to gain access to certain tools inside the docker-desktop distro in order to learn more about it.
I was going to say, "don't" (with more explanation), but the reality is that I think it's a potentially good learning experience. I've done it, to some degree, so who am I to say it's "too dangerous" or recommend against it? ;-)
I will give fair warning, though -- The docker-desktop distro isn't intended to be run by users. Docker Desktop "injects" links and sockets into your other WSL2 distros (which you can enable/disable per-distro in Docker Desktop) so that its tools, processes, etc., are available to all your WSL2 (and PowerShell/CMD) instances.
I'd personally try to avoid making any changes to the docker-desktop distro itself. They'll likely be overwritten anyway by Docker Desktop when it extracts a new rootfs.
However, we can still gain access to the tools we need by accessing them from another distribution, but without copying them into docker-desktop.
First, a note -- As I think you have probably already figured out, docker-desktop is also musl-based. So you'll want to use tools from another musl-based distro like Alpine.
This can be easily accomplished by running the following line once in your Alpine instance (as root):
echo "/ /mnt/wsl/instances/Alpine none defaults,bind,X-mount.mkdir 0 0" >> /etc/fstab
That will add a bind mount of the Alpine root into the tmpfs /mnt/wsl mount. You can see my Super User answer here for more details on that.
Once you wsl --terminate Alpine and restart it, you'll have access to the Alpine files from any other WSL2 distribution.
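To confirm the mount is in place (from Alpine itself or any other running WSL2 distro):
ls /mnt/wsl/instances/Alpine   # should list the Alpine root: bin, etc, home, usr, ...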
As a useful (for your intent) example, install the util-linux package in Alpine to get access to the lsns command.
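On Alpine that is simply (as root):
apk add util-linux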
Then, in the docker-desktop distro (which I assume you already know to access with wsl -u root -d docker-desktop, but I'll include that command here for other future readers), to list the namespaces:
/mnt/host/wsl/instances/Alpine/usr/bin/lsns
The docker-desktop instance automounts at a slightly different directory than default (see cat /etc/wsl.conf), so you need to adjust the path to /mnt/host/wsl instead of /mnt/wsl.
But with that in place, you can run all (most?) of your Alpine binaries directly in docker-desktop without having to modify it directly. If you have a script in your home directory that you want to run in docker-desktop, for instance:
/mnt/host/wsl/instances/Alpine/home/users/<yourusername>/hello.sh
Note that if you have a binary that requires a dynamically-linked library on Alpine, I'm assuming you'll need to adjust your LD_LIBRARY_PATH accordingly, although I haven't tested. For instance:
LD_LIBRARY_PATH=/mnt/host/wsl/instances/Alpine/usr/lib /mnt/host/wsl/instances/Alpine/usr/bin/<whatever>

Related

How do you get a launcher for firefox?

I hope that I'm tagging/asking on the correct page. I'm using Linux Mint 6.0, but it could be OS independent.
The command used for installing Firefox was
nix-env -iA nixpkgs.firefox-esr
When I type which firefox, I get:
/home/foo/.nix-profile/bin/firefox
So Linux Mint comes with Chrome preinstalled, which has a launcher, e.g. also in the start menu. How do I get that for Firefox as well? I didn't find a tool to create such a launcher in Mint, and I actually think that nix should do that for me.
EDIT: I also found this page which seemed helpful and advertised e.g. the KDE Kickoff, but I wasn't able to get that one to run.
I can only speak for Ubuntu launchers, but other distros will have launcher files with a similar setup
TLDR, add ~/.nix-profile/share to the XDG_DATA_DIRS env variable on login. Add the following to ~/.profile after the nix loading commands
export XDG_DATA_DIRS=$HOME/.nix-profile/share:$XDG_DATA_DIRS
Explanation:
Packages installed via nix have an immutable path in /nix/store. ~/.nix-profile/bin/firefox is the derivation your current nix environment is linked to (if you update the firefox package, it'll point to the new one)
This means you can create a launcher file for that executable. Let's see if the firefox-esr derivation comes with a desktop launcher or not:
$ nix-build '<nixpkgs>' -A firefox-esr
This will build the package and give you a derivation path. For my current channel it is /nix/store/3iipcmiykgr4p34fg3rkicdz1bw584gm-firefox-102.2.0esr
If I check inside it, there is a .desktop file which defines Ubuntu launchers:
$ ls /nix/store/3iipcmiykgr4p34fg3rkicdz1bw584gm-firefox-102.2.0esr/share/applications
firefox.desktop
These files will also be available under ~/.nix-profile/share/applications, so you can simply add that directory to the XDG_DATA_DIRS env variable at login
If an application does not ship one, you can manually make one and add it under ~/.local/share/applications, then set the executable path to the nix one, as in the sketch below
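A minimal hand-written launcher could look like this (the Name, Icon, and Exec path here are illustrative; adjust them to your own profile):
[Desktop Entry]
Type=Application
Name=Firefox (nix)
Exec=/home/foo/.nix-profile/bin/firefox %u
Icon=firefox
Categories=Network;WebBrowser;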
As SuperSandro2000 explained in the comments, firefox from nix ships with a .desktop file already. This can easily be added to the start menu and lies in
/nix/store/...-firefox-XXX.X/share/applications/firefox.desktop
If there is no such file included, the most direct way could be (imho) to just create a simple bash script:
#!/bin/bash
/home/foo/.nix-profile/bin/firefox & # run Firefox
echo Firefox was started with PID $!
In order to make it runnable, enter chmod +x your_script_name.sh. Afterwards, ./your_script_name.sh 2> /dev/null & can be used to run it silently in the background.
You can also consider the developer/command line options for firefox (Archive) or this blog article here.
Maybe /usr/bin/menulibre is also the right application; it allows you to create .desktop files. This app can also be found by right-clicking on the start menu.

Can Docker be used to run Linux CLI tools from macOS?

I am writing software on macOS. As a subroutine I would like to call certain Linux-only CLI tools, e.g., > mytool inputfile. Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container?). And if I can, is it a good idea or will there be issues installing and compiling Linux packages?
From my understanding of docker as basically a lightweight VM that uses a stripped down version of a Linux distribution, this approach seems to make sense, but the stripped down aspect might be an impediment.
Can Docker be used to run Linux CLI tools from macOS?
Docker supports macOS according to documentation.
Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)?
Yes.
And if I can, is it a good idea
Depends on the term "good" - it's subjective and highly depends on specific case.
or will there be issues installing and compiling Linux packages?
No.
From my understanding of docker as basically a lightweight VM
Yes.
that uses a stripped down version of a Linux distribution, this approach seems to make sense, but the stripped down aspect might be an impediment.
What is in a docker container depends on the container. Usually, man pages and system package manager repository information are removed from images. But I would disagree with the "stripped down" description - mostly docker containers come with full Linux distributions and can be used as such.
You should do as follows:
docker run --rm -v /:/host -ti ubuntu ... your command referring to /host...
And this is the explanation of the command parameters:
--rm : removes the container after running (but keeps the image cached for later calls).
-t : allocates a visible shell terminal (a pseudo-TTY).
-i : runs in interactive mode.
-v /:/host : maps your root folder to the container's /host folder.
ubuntu : pulls the ubuntu image, which you can change to any other you prefer.
As the last parameter, put the command to run inside the container, with paths relative to /host.
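So the mytool call from the question might look like the following, with md5sum standing in for mytool (since md5sum actually ships in the ubuntu image) and an assumed macOS path:
docker run --rm -v /:/host -ti ubuntu md5sum /host/Users/you/inputfile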

Where does '~' expand to when mounted in docker with windows subsystem for linux?

I have a docker container I wrote that sets up AWS profiles for me. In Linux it works great, on WSL it partially works.
When I run the container I am mounting the ~/.aws directory, checking if the profiles exist and if they don't exist I create them. If they do exist I don't do anything.
In Linux I can run this container and then continue to use aws-cli with no problems.
In Windows Subsystem for Linux - when I run the container the first time around, it will create the profiles for me. If I choose to run the container again, it sees that the profiles already exist, so it does nothing. This tells me the file exists somewhere, but I can't use aws-cli because the file doesn't exist at ~/.aws.
So my question is: where is ~/.aws in WSL when mounted to a docker container? I've attempted to do a find on the entire filesystem in WSL and that returns nothing. I've also tried changing the mount path to /root/.aws and I run into the same conditions.
EDIT:
I still don't know the answer to my question above. But if anyone comes across this question I did find a work around.
I've updated Docker Desktop to allow mounting the entire c:/ drive. Then I just changed my docker run command to mount c:/.aws instead of ~/.aws, so my command looks like -v c:/.aws:/root/.aws. After that I added this environment variable in WSL: export AWS_SHARED_CREDENTIALS_FILE="/mnt/c/.aws/credentials", and now the aws cli picks up on my profile changes.
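Put together, the workaround looks something like this (the image name is a placeholder for whatever container sets up the profiles):
docker run -v c:/.aws:/root/.aws my-aws-setup-image
export AWS_SHARED_CREDENTIALS_FILE="/mnt/c/.aws/credentials"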
The shell always expands ~ to the value of the HOME environment variable. If that environment variable is not set, then it expands to nothing. If you want to find where ~/.aws is located, then you can write something like echo ~/.aws and the shell will expand it for you.
The only exception is that ~user expands to the home directory of the user user; the HOME environment variable is not consulted there.
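Both behaviors are easy to check from the shell:
$ echo ~/.aws        # prints $HOME/.aws, e.g. /home/you/.aws
$ echo ~root/.aws    # prints root's home from the passwd database, e.g. /root/.aws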
You have to remember that in your setup the docker engine (Docker for Windows) is installed on Windows; it is inside the Windows environment that the docker command is 'launched'. So when you say use ~/.aws, it looks in the Windows file system for this location.
In Windows, ~ is a valid directory name (try mkdir ~ from a cmd prompt), so when you map ~/.aws I'm unsure what actually gets created. Maybe try searching your C drive for a folder called ~. There is no ~ shortcut in Windows for the home folder, and if there was, which home would it be? The home of the logged-in Windows user? Or the home inside WSL?
To make this work in WSL you need to pass ~/.aws to wslpath like this:
➜ echo $(wslpath ~/.aws)
/mnt/c/home/damo/.aws
But this location is the path according to WSL, not Windows. You need to do it twice, with the -w flag the second time:
➜ echo $(wslpath -w $(wslpath ~/.aws))
C:\home\damo\.aws
which would make your final docker command look like this:
docker run -it -v $(wslpath -w $(wslpath ~/.aws)):/root/.aws awsprofileprocessor:latest
With this you will now be telling Docker for Windows the Windows path to put the mount.
Please let me know if this works for you; I'm interested in how it turns out.

Running Docker Image

The user guide states that an image should be run as follows:
docker run -t -i ubuntu /bin/bash
I get that -t creates the pseudo-terminal and -i makes it interactive. But it seems that the /bin/bash part is unnecessary. Whether I run it with or without /bin/bash, I'm given an interactive prompt that I can read and write from both times.
root@77eeb1f4ac2a:/#
Why do we need /bin/bash?
Part 2
I'm running on Docker for Mac. When I download the hello-world binary and run it, it's only 1kb. Obviously a Linux image wasn't downloaded with it. Is the small hello-world binary running off my Mac kernel or off of a small Linux kernel that comes with Docker for Mac?
Why do we need /bin/bash?
Because while the ubuntu image may be configured to run /bin/bash by default, that's not going to be true of every image. If you have an image that starts a webserver by default, and you want to run bash...you need to make that explicit. Some images don't specify any default command, leading to:
$ docker run -it alpine
docker: Error response from daemon: No command specified.
It never hurts to be explicit when starting a container, especially when using an image that you didn't build yourself.
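Making the command explicit avoids relying on the image's default:
$ docker run -it alpine /bin/sh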
When I download the hello-world binary and run it...
Which hello-world binary?
but is a VM of Linux executing it or is my mac executing it?
Docker only runs under Linux. When you are using Docker under OS X or Windows, you are running containers inside a Linux VM spawned for that purpose by docker-machine (or, previously, boot2docker). Under Windows Docker uses Hyper-V, and on OS X it previously used VirtualBox and in more recent versions may be using something else (it's been a while since I've run Docker under OS X).
Part 1:
Whatever you pass after docker run -t -i ubuntu is the first command that your container will run. You can try using /bin/bash, /bin/sh, or even echo hello and see it in action. Ubuntu uses bash by default, but other containers use other commands based on their Dockerfiles.
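For example, the container runs whatever command you pass and exits when it finishes:
$ docker run -t -i ubuntu echo hello
hello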
Part 2:
When you run hello-world, a docker container is created from the hello-world image. Containers "include the application and all of its dependencies -- but share the kernel with other containers, running as isolated processes in user space on the host operating system."
hello-world specifically is created from scratch: https://hub.docker.com/_/scratch/

Running gvfs after building

I am trying to run a local build of gvfs. I have followed the Newcomers document to set up a working build environment, built gvfs from sources and am now trying to figure out how to run it.
The docs have instructions on running applications or the GNOME shell, which say I need to kill the current instance, then launch the newly-built binary with jhbuild run, as in:
$ killall gnome-weather
$ jhbuild run gnome-weather
or, in the case of the shell,
$ jhbuild run gnome-shell --replace
For gvfs, I see that it spawns a bunch of processes (all children of PID 1, running under my account), the first of them (lowest PID) being gvfsd. So I tried the following:
$ killall gvfsd
$ jhbuild run gvfs
Which gives me the error message:
jhbuild run: Unable to execute the command 'gvfs': [Errno 2] No such file or directory
If instead I try
$ jhbuild run gvfsd
I get the same message. Same when I try any of the above two with --replace.
Since gvfs is a daemon rather than an application, I searched around a bit and came across this post, which suggests launching daemons with
jhbuild run dbus-launch --exit-with-session name-of-daemon
No joy either... no matter whether I use gvfs or gvfsd for the name, I get the error message
Couldn't exec gvfs: No such file or directory
(reporting the name I specified in the command).
Is this the correct way to launch gvfs at all? If not, what is? If it is, how can I find out what's going wrong?
EDIT: Apparently, the code I intend to modify is part of the gvfs-mtp-volume-monitor binary – but essentially the same goes here. How do I launch my own version of the binary rather than the one that came with my OS distro?
jhbuild run can be used for gvfs in the same manner.
For gvfsd do the following:
jhbuild run ~/jhbuild/install/libexec/gvfsd -r
The -r switch tells gvfsd to replace any running version. gvfsd will also start gvfsd-fuse if it was built and you didn't disable it via a command-line switch.
You will also need to replace any volume monitors (and other processes you need), such as:
killall gvfs-mtp-volume-monitor
jhbuild run ~/jhbuild/install/libexec/gvfs-mtp-volume-monitor
Care must be taken with anything that is invoked over dbus:
Namespaces may change between versions. If that happened between the version shipped with your OS and the current one, the latter will not work unless you tweak your dbus config to reflect that.
If dbus is used to spawn processes, it will fall back to the binaries shipped with your OS. Again you would need to modify your dbus config (specifically .service entries) to point to your binaries.
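For the second point, editing the Exec line of the relevant D-Bus .service entry is usually what's needed. A sketch, assuming the jhbuild install prefix used above (org.gtk.vfs.Daemon is the real gvfs daemon service name, but check your own install):
# ~/jhbuild/install/share/dbus-1/services/org.gtk.vfs.Daemon.service
[D-BUS Service]
Name=org.gtk.vfs.Daemon
Exec=/home/you/jhbuild/install/libexec/gvfsd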