Is it possible to automount a drive using wsl --mount after restart? (WSL 2)

I am able to mount my ext4 partition in wsl-2 using the following command in powershell:
wsl --mount \\.\PHYSICALDRIVE4 --partition 1
However, when I either restart my computer or run wsl --shutdown, the partition is unmounted and I have to run the above command again. Is there a way to automount the partition?
Thanks.

After asking around on the WSL GitHub, this option is currently not supported; however, there is a workaround, shown below, for anyone who needs this functionality.
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /V "Mount PhysicalDrive4" /t REG_SZ /F /D "C:\Windows\System32\wsl.exe --mount \\.\PHYSICALDRIVE4 --partition 1"
Also, I forgot to mention that this functionality is only available on Windows Insider preview build 20211 and above.
The solution proposed by @dopewind below doesn't work in this case, because ext4 mounting in WSL 2 has to happen in PowerShell (with admin rights), not in the installed Linux distro.
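One caveat worth checking: programs launched from the HKLM Run key run with the user's normal (non-elevated) token, while wsl --mount needs admin rights. If the Run-key entry fails for that reason on your setup, a scheduled task can be made to run elevated at logon instead; a sketch (the task name is an assumption, the mount command is the same as above):

```
schtasks /Create /TN "Mount PhysicalDrive4" /TR "C:\Windows\System32\wsl.exe --mount \\.\PHYSICALDRIVE4 --partition 1" /SC ONLOGON /RL HIGHEST /F
```

Creating a task with /RL HIGHEST itself requires an elevated prompt, but afterwards it runs automatically at each logon.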

Just add the command to the end of the .bashrc file in your home directory in WSL (or the .zshrc file if you use zsh).

Related

How can nodemon be made to work with WSL 2?

Ever since updating from WSL 1 to WSL 2 with the Windows 10 April 2020 update (and thereafter updating Ubuntu 18 to Ubuntu 20), I have not been able to get nodemon to hot reload when there are file changes in the project's directory. When I make any changes to .js files, there's no restarting of the server or output at the terminal.
I start my Node.js server with nodemon like this:
NODE_ENV=development DEBUG='knex:*' nodemon --verbose --inspect ./server.js
And in case it's useful, here is my server.js:
const express = require('express'); // assuming Express, given the app.listen call below
const app = express();
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server started and listening on port ${PORT}`);
});
I am not even sure how to troubleshoot this further to get more useful information about what's going on.
Root cause:
inotify is not fully supported in the 9P filesystem protocol on WSL2.
There are several GitHub issues on the WSL project related to this, but perhaps the most relevant is #4739.
Possible Workarounds:
Try nodemon -L (a.k.a. --legacy-watch) as Simperfy suggested.
Try running from the default ext4 filesystem (e.g. mkdir -p $HOME/Projects/testserver). Note that a symlink to the Windows filesystem will still not work. As a bonus, the WSL ext4 filesystem is much faster for file-intensive operations like git.
You can still access the source from Windows editors and tools through \\wsl$\.
Use Visual Studio Code with the Remote-WSL extension to edit your source on the Windows filesystem. The easiest way to do this is by navigating in WSL to your project directory and running code ..
Visual Studio Code's WSL integration does trigger inotify for some reason.
Downgrade the session to WSL1 if you don't need any of the WSL2 features. I keep both WSL1 and WSL2 sessions around. The best way to do this is to create a backup of the session with wsl --export and wsl --import. You can switch the version of a WSL distro at any point with wsl --set-version.
I did test this on WSL1 with a sample project under the Windows filesystem, and editing via something as basic as notepad.exe under Windows still triggered nodemon to restart.
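For reference, the export/import/set-version flow from the last bullet might look like this (distro names and backup paths here are assumptions; substitute your own from wsl --list):

```
wsl --export Ubuntu-20.04 D:\wsl-backup\ubuntu.tar
wsl --import Ubuntu-20.04-WSL1 D:\wsl\Ubuntu-WSL1 D:\wsl-backup\ubuntu.tar
wsl --set-version Ubuntu-20.04-WSL1 1
```

This leaves the original WSL2 distro untouched and gives you a parallel WSL1 copy to test against.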
Longer answer:
nodemon worked "out of the box" for me on WSL2 on the root (/) ext4 mount (e.g. $HOME/src/testserver).
It also worked correctly when I tried it under the default /mnt/c mount that WSL/WSL2 creates. Of course, /mnt/c is much slower under WSL2. Edit - It turns out that I was using Visual Studio Code when I attempted this. Editing from other Windows apps on the Windows filesystem did not trigger nodemon to restart.
But looking at the first line of your screenshot, I see that you are running this from /c/Users/.... I'm thinking maybe you created this (perhaps CIFS) mount to try to work around the WSL2 performance issues; it's a common workaround.
I didn't set up a CIFS mount, but I was able to reproduce your problem by mounting with (substituting your Windows username):
mkdir $HOME/mnttest
sudo mount -t drvfs 'C:' $HOME/mnttest
cd $HOME/mnttest/Users/Raj/Projects/testserver
Running nodemon from this mount failed in the same manner that you describe -- Changes to the source did not trigger a restart.
However, running with nodemon -L on this mount did trigger a restart when source files were changed.
It also may be possible to fix the problem by mounting with different options, but I'm just not sure at this point. Edit: Seems unlikely, given the bug reports on this on GitHub.
Also, you may want to create some exports/backups of your WSL sessions. It's too late at this point (for your previous install), but you could have run wsl.exe --export to create a backup of the Ubuntu 18.04/WSL1 filesystem before upgrading. You can also change the version of a particular distribution with wsl.exe --set-version. This could give you some better "before/after" test comparisons.
I am using WSL 2 and I solved the issue by adding the following environment variable: CHOKIDAR_USEPOLLING=true.
This is what my nodemon command looks like:
CHOKIDAR_USEPOLLING=true nodemon index.js
Now you can keep WSL2 instead of moving your environment to WSL1.
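If you'd rather not prefix every run with the environment variable, nodemon can also be switched to its polling-based legacy watcher via a nodemon.json in the project root; a sketch (legacyWatch is nodemon's config-file equivalent of the -L flag, and the watch/ext values here are just illustrative):

```json
{
  "legacyWatch": true,
  "watch": ["./"],
  "ext": "js,json"
}
```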

Where does '~' expand to when mounted in Docker with Windows Subsystem for Linux?

I have a docker container I wrote that sets up AWS profiles for me. In Linux it works great, on WSL it partially works.
When I run the container I am mounting the ~/.aws directory, checking if the profiles exist and if they don't exist I create them. If they do exist I don't do anything.
In Linux I can run this container and then continue to use aws-cli with no problems.
In Windows Subsystem for Linux, when I run the container the first time, it will create the profiles for me. If I run the container again, it sees that the profiles already exist, so it does nothing. This tells me the file exists somewhere, but I can't use aws-cli because the file doesn't exist at ~/.aws.
So my question is where is ~/.aws in WSL when mounted to a docker container? I've attempted to do a find on the entire filesystem in WSL and that returns nothing. I've also tried changing the mount path to /root/.aws and I run into the same conditions.
EDIT:
I still don't know the answer to my question above, but if anyone comes across this question, I did find a workaround.
I've updated Docker Desktop to allow mounting the entire C:/ drive. Then I just changed my docker run command to mount c:/.aws instead of ~/.aws, so my command looks like -v c:/.aws:/root/.aws. After that I added this environment variable in WSL: export AWS_SHARED_CREDENTIALS_FILE="/mnt/c/.aws/credentials", and now the AWS CLI picks up on my profile changes.
The shell always expands ~ to the value of the HOME environment variable. If that environment variable is not set, then it expands to nothing. If you want to find where ~/.aws is located, then you can write something like echo ~/.aws and the shell will expand it for you.
The only exception is that ~user expands to the home directory of the user user; the HOME environment variable is not consulted there.
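A quick way to see this expansion in action (the /tmp/demo value is purely illustrative):

```shell
# ~ is replaced with the value of HOME when the shell expands the word
result=$(HOME=/tmp/demo bash -c 'echo ~/.aws')
echo "$result"      # /tmp/demo/.aws

# quoting suppresses the expansion entirely
echo '~/.aws'       # prints the literal string ~/.aws
```

So if ~/.aws resolves somewhere unexpected, the first thing to check is what HOME is set to in the shell that runs docker.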
You have to remember that in your setup the Docker engine (Docker for Windows) is installed on Windows; it is inside the Windows environment that the docker command is launched. So when you say to use ~/.aws, it looks in the Windows filesystem for this location.
In Windows, ~ is a valid directory name (try mkdir ~ from a cmd prompt), so when you map ~/.aws I'm unsure what actually gets created; maybe try searching your C: drive for a folder called ~. There is no ~ shortcut in Windows for the home folder, and if there were, which home would it be? The home of the logged-in Windows user, or the home inside WSL?
To make this work in WSL you need to pass ~/.aws to wslpath like this:
➜ echo $(wslpath ~/.aws)
/mnt/c/home/damo/.aws
But this location is the path according to WSL, not Windows; you need to call wslpath twice, with the -w flag the second time:
➜ echo $(wslpath -w $(wslpath ~/.aws))
C:\home\damo\.aws
which would make your final docker command look like this:
docker run -it -v $(wslpath -w $(wslpath ~/.aws)):/root/.aws awsprofileprocessor:latest
With this you will now be telling Docker for Windows the Windows path to use for the mount.
Please let me know if this works for you; I'm interested in how it turns out.

Install/Update cifs-utils before mount smb

I'm currently trying to get Vagrant to provision a working CentOS 7 image on Windows 10, using Hyper-V. Vagrant 1.8.4, the current latest.
I encounter a problem where the provisioning fails and I need to work around it each time. The CentOS 7 image is a minimal image and does not include cifs-utils, so the mount won't work. I therefore need cifs-utils installed before the mount.
Error:
==> default: Mounting SMB shared folders...
default: C:/Programs/vagrant_stuff/centos7 => /vagrant
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3`,sec=ntlm,credentials=/etc/smb_creds_4d99b2d500a1bcb656d5a1c481a47191 //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191 /vagrant
mount -t cifs -o uid=`id -u vagrant`,gid=`id -g vagrant`,sec=ntlm,credentials=/etc/smb_creds_4d99b2d500a1bcb656d5a1c481a47191 //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191 /vagrant
The error output from the last command was:
mount: wrong fs type, bad option, bad superblock on //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
As it is now, the provisioning has to fail, and I need to:
vagrant ssh (powershell)
(connect to instance via putty/ssh)
sudo yum install cifs-utils -y (putty/ssh)
(wait for install...)
exit (putty/ssh)
vagrant reload --provision (powershell)
This is obviously a pain and I am trying to streamline the process.
Does anyone know a better way?
You can install the missing package in your box and repackage it, so you can distribute a new version of the box containing the missing package.
To provision a Vagrant base box you need to create it from an ISO. While preparing the box you can install all the packages you need. In your case the provider is Hyper-V: https://www.vagrantup.com/docs/hyperv/boxes.html
Best Regards
Apparently my original question was downvoted for some reason. #whatever
As I mentioned in one of the comments above:
I managed to repackage and upload an updated version. Thanks for the advice. It's available in Atlas as "KptnKMan/bluefhypervalphacentos7repack".
Special thanks to @frédéric-henri :)

Eject CD-ROM Drive after application Installation

I have an application in Linux that installs from a CD-ROM Device.
When the CD-ROM is inserted into the drive, the autorun feature runs the installation script in an xterm window. When the installation is over, I do an 'exit 1', and the xterm window prompts the user to 'press any key to close the window'. My problem is that I would like the script to eject the CD-ROM drive after the installation is over.
However, since the installation script is still running from the CD-ROM drive, the script cannot unmount and eject the drive.
Could anyone please give me some idea of how the script could eject the CD drive after installation?
You could use a local installation script that refers to the installation files on the CD. That does mean your user will have to copy the file locally in order to start the installation program.
Another option could be that your installation program could create the supplementary install file when the user starts the install process from CD.
Before the installer starts, copy the installer and an "eject CD" shell script to the /tmp directory on the Linux machine. Then execute the installer from /tmp, and create a shell script that executes (or find a way to execute) the following commands:
sudo umount /dev/cdrom
eject /dev/cdrom
Also, on some machines it's sudo umount /dev/sr0, but /dev/cdrom should work.
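The copy-to-/tmp step can be sketched as a small helper; this is just an illustration of the idea (the path names are assumptions), and the real installer would exec the copy and let it finish the job:

```shell
# Copy the running installer off the CD so nothing keeps the mount busy;
# the caller would then exec the copy and continue the install from /tmp.
relocate() {
    src="$1"                          # script path on the CD mount
    dest="/tmp/$(basename "$src")"    # destination off the disc
    cp "$src" "$dest" && chmod +x "$dest"
    printf '%s\n' "$dest"             # caller: exec "$dest" "$@"
}
```

Once the exec'd copy is running from /tmp, no process has its executable or working directory on the disc, so the umount and eject above can succeed.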

Change size of files on Windows machine from remote Linux machine

I need to somehow change the size of files (increase or decrease) on a Windows machine using bash scripts (the content of the files doesn't matter), but I have to run these scripts from a remote Linux machine. I've selected the truncate command for the size change; this is exactly what I need, because I need to change the size of exactly the chosen file without changing its descriptor. That is very important.
But I DO NOT have truncate on my Linux machine and I CANNOT install it there (so don't tell me to install it there, please :)). I cannot install anything on my Linux machine; it has a specific kernel, and this is the main source of all my problems.
So I've decided to install Cygwin on my Windows machine, because it has the truncate command. I also know there is a fallocate command, but my Linux machine doesn't have it either, and Cygwin doesn't have it at all. So if there is some other command, I want to know about it :)
After these steps I tried to change a file's size from the Cygwin terminal via truncate, and it all worked perfectly. The last problem to solve was running Cygwin's bash from my remote Linux machine; I chose winexe for that.
Finally, the way I've chosen is: I run a winexe command on my Linux machine that runs
winexe myHost "c:\cygwin\bin\bash.exe myScriptWithTRUNCATE"
on my Windows machine.
But it doesn't work, and I don't know why. The truncate command doesn't change the size of the files at all. When I type
truncate --help
everything works and I can see the result of the help option in my Linux terminal, but e.g.
truncate -s someSize myFile
doesn't work; the size of the file doesn't change. Also, the exit code from truncate -s someSize myFile is 0.
Any suggestions?
Try giving the name of your script, "myScriptWithTRUNCATE", directly in the winexe command.
Example:
winexe myHost "c:\cygwin\bin\bash.exe myScriptWithTRUNCATE"
Also check the debug log of winexe by modifying the winexe command as:
winexe -d 5 myHost "c:\cygwin\bin\bash.exe myScriptWithTRUNCATE"
See in this log what is actually sent over to Windows as the command in place of your script; specifically, look at "CTRL: sending command : run xxxxx" and see what "xxxxx" is in that debug log.
winexe gives you control of the Windows command line (cmd.exe). Try running your script after it has got control of cmd.exe.
Based on the findings above, try this link for more help:
http://blog.dotsmart.net/2011/01/27/executing-cygwin-bash-scripts-on-windows/
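As an aside, on the "some other command" part of the question: when neither truncate nor fallocate is available, GNU dd (which Cygwin also ships) can set a file to an exact size, because with seek= given and conv=notrunc omitted, dd truncates or extends the output file to bs*seek bytes before copying. A sketch (myFile and the 1 MiB size are just examples):

```shell
# Set myFile to exactly 1 MiB without truncate/fallocate:
# /dev/null yields immediate EOF, so nothing is copied, and dd
# sizes the output file to bs*seek = 1048576 bytes.
dd if=/dev/null of=myFile bs=1 seek=1048576
```

The same command with a smaller seek value shrinks the file, so it covers both the increase and decrease cases.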
